Feed aggregator
USERID Syntax Warning in GoldenGate Migration Utility
When running the GoldenGate migration utility, which I presented in a previous blog post, you might encounter the following warning about the USERID syntax:
WARNING: USERID/PASSWORD parameter is no longer supported and will be modified to use USERIDALIAS for the database credentials.
This is not really a surprise, since USERID is an old syntax that should not be used anymore. In fact, it is not even part of the latest versions of GoldenGate. Let’s see a very basic example of a replicat running in the GoldenGate 19c Classic Architecture.
GGSCI (ogg) 1> view params rep
REPLICAT rep
USERIDALIAS dbi_blog
MAP PDB1.APP_PDB1.SOURCE, PDB2.APP_PDB2.TARGET,
COLMAP (
COL_SOURCE_USER = COL_TARGET_USER,
COL_SOURCE_USERID = COL_TARGET_USERID
);
Nothing here should be a cause for concern, because the database connection is done with the USERIDALIAS syntax. Yet, when running the migration utility in dryrun mode, I get the following warning:
Migration of Extract E2T Completed Successfully.
Parameter file for REPLICAT REP has the following warnings:
WARNING: USERID/PASSWORD parameter is no longer supported and will be modified to use USERIDALIAS for the database credentials.
Migrating REPLICAT REP to http://oggma:port.
Parameter File rep.prm Saved Successfully.
Checkpoint File(s) Copied and Converted Successfully.
REPLICAT REP patched.
...
Migration Summary
Migration of Replicat REP ..............................: Successful
...
It is technically not an error, and the migration utility seems to have no problem migrating this replicat to the new Microservices Architecture. However, a question remains: will USERID be replaced in the process? Of course, we do not want all USERID occurrences to be replaced with USERIDALIAS when they are not connection keywords.
Let’s run the migration utility, for real this time, to see what happens. The output of the migration utility is exactly the same as before. The process is migrated with a warning on the USERID syntax.
WARNING: USERID/PASSWORD parameter is no longer supported and will be modified to use USERIDALIAS for the database credentials.
And if we look at the migrated parameter file:
> grep USERID $OGG_DEPLOYMENT_HOME/etc/conf/ogg/rep.prm
USERIDALIAS dbi_blog
COL_SOURCE_USER = COL_TARGET_USER,
COL_SOURCE_USERID = COL_TARGET_USERID
In this specific case, despite the warning on USERID, the migration utility did not change the parameter file. But of course, if you get the warning, you should always check the migrated parameter file before restarting your GoldenGate processes:
> diff $OLD_OGG_HOME/dirprm/rep.prm $OGG_DEPLOYMENT_HOME/etc/conf/ogg/rep.prm
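Beyond a manual diff, the check can be automated. The helper below is my own sketch (not part of the migration utility): it flags any parameter file where USERID appears as a connection keyword, i.e. at the start of a line, as opposed to column names such as COL_SOURCE_USERID inside a COLMAP clause.

```shell
# Hypothetical helper: flag parameter files that still use the USERID
# connection syntax. The pattern matches USERID only at the beginning of
# a line and followed by whitespace, so USERIDALIAS and column names like
# COL_SOURCE_USERID do not trigger a false positive.
check_userid() {
  prm="$1"
  if grep -Eiq '^[[:space:]]*USERID[[:space:]]' "$prm"; then
    echo "REVIEW: $prm uses the USERID connection syntax"
  else
    echo "OK: $prm"
  fi
}

# Example usage: scan every migrated parameter file in the deployment.
# for f in "$OGG_DEPLOYMENT_HOME"/etc/conf/ogg/*.prm; do check_userid "$f"; done
```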
The article USERID Syntax Warning in GoldenGate Migration Utility appeared first on dbi Blog.
Dctm – IDS Source 16.7.5 config.bin crash during execution
Around six months ago, I faced a confusing issue with IDS Source 16.7.5 where the “config.bin” executable always crashed when I tried to run it. The installation of IDS binaries itself completed successfully without any errors. However, the configurator, which is supposed to set up the required objects inside the Repository, consistently crashed.
1. Environment context and IDS upgrade
This Documentum environment had just been upgraded to 23.4. The next step was to upgrade the associated IDS component. The latest version of IDS compatible with recent Documentum versions is 16.7.5.
The execution of the “idsLinuxSuiteSetup.bin” installer properly extracted all binaries and deployed the WebCache application in its Tomcat server. To quickly verify that, you can check the version properties file and try starting/stopping the Tomcat instance of the IDS. On my side, there were no problems with that.
To verify the installed version of IDS and ensure that the configurator was also updated:
[dmadmin@cs-0 ~]$ cd $DM_HOME/webcache
[dmadmin@cs-0 webcache]$
[dmadmin@cs-0 webcache]$ cat version/version.properties
#Please don't remove this values
#Fri Oct 10 09:52:49 UTC 2025
INSTALLER_NAME=IDS
PRODUCT_VERSION=16.7.5
[dmadmin@cs-0 webcache]$
[dmadmin@cs-0 webcache]$ ls -l install/config.bin
-rwxrwxr-x 1 dmadmin dmadmin 54943847 Oct 19 2024 install/config.bin
[dmadmin@cs-0 webcache]$
The above confirms that WebCache was properly updated to version 16.7.5 on October 10. It also confirms that the “config.bin” is fairly recent (Q4 2024), i.e. much more recent than the old 16.7.4 file.
2. Running the IDS configurator in silent mode
My next step was therefore to execute the configurator, again in silent mode, as I have done for all previous IDS installations and configurations. I have not written a blog about IDS silent installation yet, but I have done so for several other components. For example, you can refer to this post for the latest one published.
The silent properties file for the IDS Source configurator is quite simple, as it only requires the Repository name to configure:
[dmadmin@cs-0 webcache]$ cat ${install_file}
### Silent installation response file for IDS configurator
INSTALLER_UI=silent
KEEP_TEMP_FILE=true
### Configuration parameters
DOC_BASE_NAME=REPO_01
[dmadmin@cs-0 webcache]$
Initially, I simply executed “config.bin“. Since it crashed and there was absolutely nothing in the logs, I had to run it again with the DEBUG flag enabled:
[dmadmin@cs-0 webcache]$ $DM_HOME/webcache/install/config.bin -DLOG_LEVEL=DEBUG -f ${install_file}
Preparing to install
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...
Launching installer...
Picked up JAVA_TOOL_OPTIONS: -Djdk.util.zip.disableZip64ExtraFieldValidation=true -Djava.locale.providers=COMPAT,SPI --add-exports=java.base/sun.security.provider=ALL-UNNAMED --add-exports=java.base/sun.security.pkcs=ALL-UNNAMED --add-exports=java.base/sun.security.x509=ALL-UNNAMED --add-exports=java.base/sun.security.util=ALL-UNNAMED --add-exports=java.base/sun.security.tools.keytool=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
[dmadmin@cs-0 webcache]$
[dmadmin@cs-0 webcache]$ echo $?
1
[dmadmin@cs-0 webcache]$
As shown above, the execution failed, as the return code was “1“. With DEBUG enabled and after checking the generated files, I found the following:
[dmadmin@cs-0 webcache]$ find . -type f -mmin -20 -ls
92475248907 380 -rw-rw-r-- 1 dmadmin dmadmin 384222 Oct 10 11:58 ./install/logs/install.log
92470810541 4 -rw-rw-r-- 1 dmadmin dmadmin 219 Oct 10 11:57 ./install/installer.properties
92475252084 12 -rwxrwxrwx 1 dmadmin dmadmin 10564 Oct 10 11:57 ./install/config_log/OpenText_Documentum_Interactive_Delivery_Services_Configuration_Install_10_10_2025_11_57_42.log
[dmadmin@cs-0 webcache]$
[dmadmin@cs-0 webcache]$ grep -iE "_E_|_F_|ERROR|WARN|FATAL" install/logs/install.log
TYPE ERROR_TYPE 0000000000000000 0 0 0
13:24:12,192 DEBUG [main] com.documentum.install.shared.common.error.DiException - null/dba/config/GR_REPO/webcache.ini (No such file or directory)
13:24:12,193 DEBUG [main] com.documentum.install.shared.common.error.DiException - null
13:24:12,194 DEBUG [main] com.documentum.install.shared.common.error.DiException - null/dba/config/REPO_01/webcache.ini (No such file or directory)
13:24:12,194 DEBUG [main] com.documentum.install.shared.common.error.DiException - null
13:24:12,199 DEBUG [main] com.documentum.install.shared.common.error.DiException - null/dba/config/GR_REPO/webcache.ini (No such file or directory)
13:24:12,199 DEBUG [main] com.documentum.install.shared.common.error.DiException - null
13:24:12,199 DEBUG [main] com.documentum.install.shared.common.error.DiException - null/dba/config/REPO_01/webcache.ini (No such file or directory)
13:24:12,199 DEBUG [main] com.documentum.install.shared.common.error.DiException - null
13:24:12,200 DEBUG [main] com.documentum.install.shared.common.error.DiException - null/dba/config/REPO_01/webcache.ini (No such file or directory)
13:24:12,200 DEBUG [main] com.documentum.install.shared.common.error.DiException - null
TYPE ERROR_TYPE 0000000000000000 0 0 0
[dmadmin@cs-0 webcache]$
The DEBUG logs above might make it look like the “$DOCUMENTUM” environment variable is missing, since it complains about “null/dba/xxx” not being found. However, that is not the issue. I checked all parameters and environment variables, and everything was configured correctly. In addition, Documentum had just been successfully upgraded from version 20.2 to 23.4 from start to finish, which confirmed that there was no problem with the OS or environment configuration. So I checked the second file:
[dmadmin@cs-0 webcache]$ cat install/config_log/OpenText_Documentum_Interactive_Delivery_Services_Configuration_Install_10_10_2025_11_57_42.log
__________________________________________________________________________
Fri Oct 10 11:57:50 UTC 2025
Free Memory: 15947 kB
Total Memory: 49152 kB
...
Summary
-------
Installation: Successful with errors.
8 Successes
0 Warnings
1 NonFatalErrors
0 FatalErrors
...
Custom Action: com.documentum.install.webcache.CustomActions.DiWAWebcsConfigureDocbase
Status: ERROR
Additional Notes: ERROR - class com.documentum.install.webcache.CustomActions.DiWAWebcsConfigureDocbase.install() runtime exception:
...
====================STDERR ENTRIES==================
RepositoryManager: Trying fallback repository location...
8. final log file name=$DM_HOME/webcache/install/config_log/OpenText_Documentum_Interactive_Delivery_Services_Configuration_Install_10_10_2025_11_57_42.log
java.lang.NumberFormatException: For input string: ""
at java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:67)
at java.base/java.lang.Integer.parseInt(Integer.java:678)
at java.base/java.lang.Integer.parseInt(Integer.java:786)
at com.documentum.install.webcache.CustomActions.DiWAWebcsConfigureDocbase.configureDocbase(DiWAWebcsConfigureDocbase.java:329)
at com.documentum.install.webcache.CustomActions.DiWAWebcsConfigureDocbase.install(DiWAWebcsConfigureDocbase.java:202)
at com.zerog.ia.installer.actions.CustomAction.installSelf(Unknown Source)
at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
at com.zerog.ia.installer.GhostDirectory.install(Unknown Source)
at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
at com.zerog.ia.installer.Installer.install(Unknown Source)
at com.zerog.ia.installer.LifeCycleManager.consoleInstallMain(Unknown Source)
at com.zerog.ia.installer.LifeCycleManager.executeApplication(Unknown Source)
at com.zerog.ia.installer.Main.main(Unknown Source)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:569)
at com.zerog.lax.LAX.launch(Unknown Source)
at com.zerog.lax.LAX.main(Unknown Source)
Execute Custom Code
class com.documentum.install.webcache.CustomActions.DiWAWebcsConfigureDocbase.install() runtime exception:
java.awt.HeadlessException:
No X11 DISPLAY variable was set,
but this program performed an operation which requires it.
at java.desktop/java.awt.GraphicsEnvironment.checkHeadless(GraphicsEnvironment.java:164)
at java.desktop/java.awt.Window.<init>(Window.java:553)
at java.desktop/java.awt.Frame.<init>(Frame.java:428)
at java.desktop/java.awt.Frame.<init>(Frame.java:393)
at java.desktop/javax.swing.JFrame.<init>(JFrame.java:180)
at com.documentum.install.webcache.CustomActions.DiWAWebcsConfigureDocbase.install(DiWAWebcsConfigureDocbase.java:215)
at com.zerog.ia.installer.actions.CustomAction.installSelf(Unknown Source)
at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
at com.zerog.ia.installer.GhostDirectory.install(Unknown Source)
at com.zerog.ia.installer.InstallablePiece.install(Unknown Source)
at com.zerog.ia.installer.Installer.install(Unknown Source)
at com.zerog.ia.installer.LifeCycleManager.consoleInstallMain(Unknown Source)
at com.zerog.ia.installer.LifeCycleManager.executeApplication(Unknown Source)
at com.zerog.ia.installer.Main.main(Unknown Source)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:569)
at com.zerog.lax.LAX.launch(Unknown Source)
at com.zerog.lax.LAX.main(Unknown Source)
Retrying Installables deferred in pass 0
Deferral retries done because:
There were no deferrals in the last pass.
8. final log file name=$DM_HOME/webcache/install/config_log/OpenText_Documentum_Interactive_Delivery_Services_Configuration_Install_10_10_2025_11_57_42.log
====================STDOUT ENTRIES==================
...
[dmadmin@cs-0 webcache]$
That log file appeared to indicate that a certain “Number” value might be missing (“NumberFormatException“). Without access to the IDS source code (and I always avoid decompiling Documentum source files), it was impossible to determine exactly what was missing. There were no additional details in the logs, so in the end I had to reach out to OpenText support to find out what was causing the issue.
4. Root cause: missing value for TOMCAT_PORT_SELECTED
After several back-and-forth exchanges and around 12 days of waiting for a solution, I finally received confirmation that this was a bug in the IDS Source 16.7.5 software. This version is the first one deployed on Tomcat instead of WildFly, so it was somewhat expected that some issues might appear.
When installing the IDS Source binaries, the silent installation properties file requires you to define the port that Tomcat will use. This parameter is “USER_PORT_CHOICE=6677“. You can of course change the port if needed, but 6677 was the default port used with previous IDS versions running on WildFly, so I kept the same value when installing IDS 16.7.5 on Tomcat.
The bug is that even though this value is used correctly during the Tomcat installation step, it is not properly written into the properties file that the configuration process later relies on. The IDS Source “config.bin” looks for the file “$DM_HOME/webcache/scs_tomcat.properties” and reads the port from the “TOMCAT_PORT_SELECTED” parameter.
However, in IDS 16.7.5 this file is not updated during installation. As a result, the port value remains empty, which corresponds to the missing number referenced in the logs and causes the configurator to crash.
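Based on that root cause, a simple pre-flight check can catch the problem before launching “config.bin”. The snippet below is my own sketch (not an OpenText tool): it verifies that TOMCAT_PORT_SELECTED carries a numeric value in a given properties file.

```shell
# Hedged pre-flight sketch: check that TOMCAT_PORT_SELECTED holds a
# numeric value in the given properties file before running config.bin.
check_tomcat_port() {
  props="$1"
  # Extract the value after "TOMCAT_PORT_SELECTED=" (empty if the bug hit).
  port=$(sed -n 's/^TOMCAT_PORT_SELECTED=//p' "$props")
  case "$port" in
    ''|*[!0-9]*) echo "KO: TOMCAT_PORT_SELECTED is empty or not numeric in $props" ;;
    *)           echo "OK: TOMCAT_PORT_SELECTED=$port" ;;
  esac
}

# Example usage:
# check_tomcat_port "$DM_HOME/webcache/scs_tomcat.properties"
```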
5. Fix: updating scs_tomcat.properties
The solution is fairly simple: manually update that file and run the configurator again. In my case, I used the HTTPS port 6679, since my Tomcat was already in SSL (6677 + 2 = 6679):
[dmadmin@cs-0 webcache]$ port=6679
[dmadmin@cs-0 webcache]$ sed -i "s,\(TOMCAT_PORT_SELECTED=\).*,\1${port}," $DM_HOME/webcache/scs_tomcat.properties
[dmadmin@cs-0 webcache]$
[dmadmin@cs-0 webcache]$
[dmadmin@cs-0 webcache]$ $DM_HOME/webcache/install/config.bin -DLOG_LEVEL=DEBUG -f ${install_file}
Preparing to install
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...
Launching installer...
Picked up JAVA_TOOL_OPTIONS: -Djdk.util.zip.disableZip64ExtraFieldValidation=true -Djava.locale.providers=COMPAT,SPI --add-exports=java.base/sun.security.provider=ALL-UNNAMED --add-exports=java.base/sun.security.pkcs=ALL-UNNAMED --add-exports=java.base/sun.security.x509=ALL-UNNAMED --add-exports=java.base/sun.security.util=ALL-UNNAMED --add-exports=java.base/sun.security.tools.keytool=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED
[dmadmin@cs-0 webcache]$
[dmadmin@cs-0 webcache]$ echo $?
0
[dmadmin@cs-0 webcache]$
As you can see above, the return code is now “0“, which indicates a successful execution. The logs generated during this new attempt are much cleaner, and there are no longer any exceptions or errors:
[dmadmin@cs-0 webcache]$ cat install/config_log/OpenText_Documentum_Interactive_Delivery_Services_Configuration_Install_10_22_2025_13_37_46.log
__________________________________________________________________________
Wed Oct 22 01:39:40 UTC 2025
Free Memory: 14800 kB
Total Memory: 49152 kB
...
Summary
-------
Installation: Successful.
9 Successes
0 Warnings
0 NonFatalErrors
0 FatalErrors
...
Custom Action: com.documentum.install.webcache.CustomActions.DiWAWebcsConfigureDocbase
Status: SUCCESSFUL
...
====================STDERR ENTRIES==================
RepositoryManager: Trying fallback repository location...
8. final log file name=$DM_HOME/webcache/install/config_log/OpenText_Documentum_Interactive_Delivery_Services_Configuration_Install_10_22_2025_13_37_46.log
Retrying Installables deferred in pass 0
Deferral retries done because:
There were no deferrals in the last pass.
8. final log file name=$DM_HOME/webcache/install/config_log/OpenText_Documentum_Interactive_Delivery_Services_Configuration_Install_10_22_2025_13_37_46.log
====================STDOUT ENTRIES==================
...
[dmadmin@cs-0 webcache]$
As mentioned earlier, this configurator is responsible for installing components inside the Repository. It creates the required IDS objects or updates them if they already exist. The DAR files were also successfully installed:
[dmadmin@cs-0 webcache]$ iapi $DOCBASE_NAME -Udmadmin -Pxxx << EOC
> ?,c,select r_object_id, r_modify_date, object_name from dmc_dar order by r_modify_date asc;
> EOC
OpenText Documentum iapi - Interactive API interface
Copyright (c) 2023. OpenText Corporation
All rights reserved.
Client Library Release 23.4.0000.0180
Connecting to Server using docbase REPO_01
[DM_SESSION_I_SESSION_START]info: "Session 011234568027fb88 started for user dmadmin."
Connected to OpenText Documentum Server running Release 23.4.0000.0143 Linux64.Oracle
1> 2>
r_object_id r_modify_date object_name
---------------- ------------------------- ---------------------------------
... ... ...
08123456800c99a5 10/22/2025 13:38:32 SCSDocApp
08123456800c99be 10/22/2025 13:38:58 SCSWorkflow
08123456800c99e1 10/22/2025 13:39:29 icmRating
(43 rows affected)
1>
[dmadmin@cs-0 webcache]$
However, I later discovered another small bug: the “scs_admin_config.product_version” attribute in the Repository was not updated correctly. The previously installed version was 16.7.4, so it is unclear whether the configurator re-wrote the old value (16.7.4 again) or simply did not touch it. In any case, the stored product version was incorrect.
This value is used by IDS to verify version compatibility during execution. For example, you can see this version referenced during the End-to-End tests. Therefore, I had to update the value manually. To correct the issue:
[dmadmin@cs-0 webcache]$ iapi $DOCBASE_NAME -Udmadmin -Pxxx << EOC
> ?,c,select product_version from scs_admin_config;
> ?,c,update scs_admin_config object set product_version='16.7.5' where product_version='16.7.4';
> ?,c,select product_version from scs_admin_config;
> exit
> EOC
OpenText Documentum iapi - Interactive API interface
Copyright (c) 2023. OpenText Corporation
All rights reserved.
Client Library Release 23.4.0000.0180
Connecting to Server using docbase REPO_01
[DM_SESSION_I_SESSION_START]info: "Session 011234568027fd13 started for user dmadmin."
Connected to OpenText Documentum Server running Release 23.4.0000.0143 Linux64.Oracle
Session id is s0
API>
product_version
------------------------
16.7.4
(1 row affected)
API>
objects_updated
---------------
1
(1 row affected)
[DM_QUERY_I_NUM_UPDATE]info: "1 objects were affected by your UPDATE statement."
API>
product_version
------------------------
16.7.5
(1 row affected)
API> Bye
[dmadmin@cs-0 webcache]$
OpenText mentioned that both of these bugs should normally be fixed in a future update of the binaries. I have not checked in the last six months, but hopefully the issue has already been resolved. If not, at least you now have the information needed to fix it!
Commercial PostgreSQL distributions with TDE (3) Cybertec PostgreSQL EE (1) Setup
In the last posts in this series we’ve looked at Fujitsu’s distribution of PostgreSQL (here and here) and EnterpriseDB’s distribution of PostgreSQL (here and here), which both come with support for TDE (Transparent Data Encryption). A third player is Cybertec with its Cybertec PostgreSQL EE distribution of PostgreSQL, and this is the distribution we’re looking at in this and the next post.
Cybertec provides free access to their repositories with the limitation of 1 GB of data per table. As with Fujitsu, the supported Linux distributions are based on RHEL (8, 9 & 10) and SLES (15 & 16).
Installing Cybertec’s distribution of PostgreSQL is, just as with Fujitsu and EnterpriseDB, a matter of attaching the repository and installing the packages. Before doing that, I’ll disable the EnterpriseDB repositories to avoid any conflicts with them when installing another distribution of PostgreSQL:
[root@postgres-tde ~]$ dnf repolist
Updating Subscription Management repositories.
repo id repo name
enterprisedb-enterprise enterprisedb-enterprise
enterprisedb-enterprise-noarch enterprisedb-enterprise-noarch
enterprisedb-enterprise-source enterprisedb-enterprise-source
rhel-9-for-x86_64-appstream-rpms Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs)
rhel-9-for-x86_64-baseos-rpms Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)
[root@postgres-tde ~]$ dnf config-manager --disable enterprisedb-*
Updating Subscription Management repositories.
[root@postgres-tde ~]$ dnf repolist
Updating Subscription Management repositories.
repo id repo name
rhel-9-for-x86_64-appstream-rpms Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs)
rhel-9-for-x86_64-baseos-rpms Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)
[root@postgres-tde ~]$
Attaching the Cybertec repository for version 18 of PostgreSQL:
[root@postgres-tde ~]$ version=18
[root@postgres-tde ~]$ sudo tee /etc/yum.repos.d/cybertec-pg$version.repo <<EOF
[cybertec_pg$version]
name=CYBERTEC PostgreSQL $version repository for RHEL/CentOS \$releasever - \$basearch
baseurl=https://repository.cybertec.at/public/$version/redhat/\$releasever/\$basearch
gpgkey=https://repository.cybertec.at/assets/cybertec-rpm.asc
enabled=1
[cybertec_common]
name=CYBERTEC common repository for RHEL/CentOS \$releasever - \$basearch
baseurl=https://repository.cybertec.at/public/common/redhat/\$releasever/\$basearch
gpgkey=https://repository.cybertec.at/assets/cybertec-rpm.asc
enabled=1
EOF
[cybertec_pg18]
name=CYBERTEC PostgreSQL 18 repository for RHEL/CentOS $releasever - $basearch
baseurl=https://repository.cybertec.at/public/18/redhat/$releasever/$basearch
gpgkey=https://repository.cybertec.at/assets/cybertec-rpm.asc
enabled=1
[cybertec_common]
name=CYBERTEC common repository for RHEL/CentOS $releasever - $basearch
baseurl=https://repository.cybertec.at/public/common/redhat/$releasever/$basearch
gpgkey=https://repository.cybertec.at/assets/cybertec-rpm.asc
enabled=1
[root@postgres-tde ~]$ dnf repolist
Updating Subscription Management repositories.
repo id repo name
cybertec_common CYBERTEC common repository for RHEL/CentOS 9 - x86_64
cybertec_pg18 CYBERTEC PostgreSQL 18 repository for RHEL/CentOS 9 - x86_64
rhel-9-for-x86_64-appstream-rpms Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs)
rhel-9-for-x86_64-baseos-rpms Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)
[root@postgres-tde ~]$
Let’s check what we have available:
[root@postgres-tde ~]$ dnf search postgresql18-ee
Updating Subscription Management repositories.
Last metadata expiration check: 0:00:10 ago on Mon 09 Mar 2026 09:33:05 AM CET.
================================================================================================== Name Exactly Matched: postgresql18-ee ===================================================================================================
postgresql18-ee.x86_64 : PostgreSQL client programs and libraries
================================================================================================= Name & Summary Matched: postgresql18-ee ==================================================================================================
postgresql18-ee-contrib-debuginfo.x86_64 : Debug information for package postgresql18-ee-contrib
postgresql18-ee-debuginfo.x86_64 : Debug information for package postgresql18-ee
postgresql18-ee-devel-debuginfo.x86_64 : Debug information for package postgresql18-ee-devel
postgresql18-ee-ecpg-devel-debuginfo.x86_64 : Debug information for package postgresql18-ee-ecpg-devel
postgresql18-ee-ecpg-libs-debuginfo.x86_64 : Debug information for package postgresql18-ee-ecpg-libs
postgresql18-ee-libs-debuginfo.x86_64 : Debug information for package postgresql18-ee-libs
postgresql18-ee-libs-oauth-debuginfo.x86_64 : Debug information for package postgresql18-ee-libs-oauth
postgresql18-ee-llvmjit-debuginfo.x86_64 : Debug information for package postgresql18-ee-llvmjit
postgresql18-ee-plperl-debuginfo.x86_64 : Debug information for package postgresql18-ee-plperl
postgresql18-ee-plpython3-debuginfo.x86_64 : Debug information for package postgresql18-ee-plpython3
postgresql18-ee-pltcl-debuginfo.x86_64 : Debug information for package postgresql18-ee-pltcl
postgresql18-ee-server-debuginfo.x86_64 : Debug information for package postgresql18-ee-server
====================================================================================================== Name Matched: postgresql18-ee =======================================================================================================
postgresql18-ee-contrib.x86_64 : Contributed source and binaries distributed with PostgreSQL
postgresql18-ee-devel.x86_64 : PostgreSQL development header files and libraries
postgresql18-ee-docs.x86_64 : Extra documentation for PostgreSQL
postgresql18-ee-ecpg-devel.x86_64 : Development files for ECPG (Embedded PostgreSQL for C)
postgresql18-ee-ecpg-libs.x86_64 : Run-time libraries for ECPG programs
postgresql18-ee-libs.x86_64 : The shared libraries required for any PostgreSQL clients
postgresql18-ee-libs-oauth.x86_64 : The shared libraries required for any PostgreSQL clients - OAuth flow
postgresql18-ee-llvmjit.x86_64 : Just-in-time compilation support for PostgreSQL
postgresql18-ee-plperl.x86_64 : The Perl procedural language for PostgreSQL
postgresql18-ee-plpython3.x86_64 : The Python3 procedural language for PostgreSQL
postgresql18-ee-pltcl.x86_64 : The Tcl procedural language for PostgreSQL
postgresql18-ee-server.x86_64 : The programs needed to create and run a PostgreSQL server
postgresql18-ee-test.x86_64 : The test suite distributed with PostgreSQL
These are the usual suspects, so let’s get it installed:
[root@postgres-tde ~]$ dnf install postgresql18-ee-server postgresql18-ee postgresql18-ee-contrib
Updating Subscription Management repositories.
Last metadata expiration check: 0:00:29 ago on Mon 09 Mar 2026 10:30:17 AM CET.
Dependencies resolved.
============================================================================================================================================================================================================================================
Package Architecture Version Repository Size
============================================================================================================================================================================================================================================
Installing:
postgresql18-ee x86_64 18.3-EE~demo.rhel9.cybertec2 cybertec_pg18 2.0 M
postgresql18-ee-contrib x86_64 18.3-EE~demo.rhel9.cybertec2 cybertec_pg18 755 k
postgresql18-ee-server x86_64 18.3-EE~demo.rhel9.cybertec2 cybertec_pg18 7.2 M
Installing dependencies:
postgresql18-ee-libs x86_64 18.3-EE~demo.rhel9.cybertec2 cybertec_pg18 299 k
Transaction Summary
============================================================================================================================================================================================================================================
Install 4 Packages
Total download size: 10 M
Installed size: 46 M
Is this ok [y/N]: y
Downloading Packages:
(1/4): postgresql18-ee-libs-18.3-EE~demo.rhel9.cybertec2.x86_64.rpm 1.4 MB/s | 299 kB 00:00
(2/4): postgresql18-ee-contrib-18.3-EE~demo.rhel9.cybertec2.x86_64.rpm 3.1 MB/s | 755 kB 00:00
(3/4): postgresql18-ee-18.3-EE~demo.rhel9.cybertec2.x86_64.rpm 6.8 MB/s | 2.0 MB 00:00
(4/4): postgresql18-ee-server-18.3-EE~demo.rhel9.cybertec2.x86_64.rpm 13 MB/s | 7.2 MB 00:00
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 13 MB/s | 10 MB 00:00
CYBERTEC PostgreSQL 18 repository for RHEL/CentOS 9 - x86_64 42 kB/s | 3.1 kB 00:00
Importing GPG key 0x2D1B5F59:
Userid : "Cybertec International (Software Signing Key) <build@cybertec.at>"
Fingerprint: FCFF 012F 4B39 9019 1352 BB03 AA6F 3CC1 2D1B 5F59
From : https://repository.cybertec.at/assets/cybertec-rpm.asc
Is this ok [y/N]: y
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : postgresql18-ee-libs-18.3-EE~demo.rhel9.cybertec2.x86_64 1/4
Running scriptlet: postgresql18-ee-libs-18.3-EE~demo.rhel9.cybertec2.x86_64 1/4
Installing : postgresql18-ee-18.3-EE~demo.rhel9.cybertec2.x86_64 2/4
Running scriptlet: postgresql18-ee-18.3-EE~demo.rhel9.cybertec2.x86_64 2/4
Running scriptlet: postgresql18-ee-server-18.3-EE~demo.rhel9.cybertec2.x86_64 3/4
Installing : postgresql18-ee-server-18.3-EE~demo.rhel9.cybertec2.x86_64 3/4
Running scriptlet: postgresql18-ee-server-18.3-EE~demo.rhel9.cybertec2.x86_64 3/4
Installing : postgresql18-ee-contrib-18.3-EE~demo.rhel9.cybertec2.x86_64 4/4
Running scriptlet: postgresql18-ee-contrib-18.3-EE~demo.rhel9.cybertec2.x86_64 4/4
Verifying : postgresql18-ee-18.3-EE~demo.rhel9.cybertec2.x86_64 1/4
Verifying : postgresql18-ee-contrib-18.3-EE~demo.rhel9.cybertec2.x86_64 2/4
Verifying : postgresql18-ee-libs-18.3-EE~demo.rhel9.cybertec2.x86_64 3/4
Verifying : postgresql18-ee-server-18.3-EE~demo.rhel9.cybertec2.x86_64 4/4
Installed products updated.
Installed:
postgresql18-ee-18.3-EE~demo.rhel9.cybertec2.x86_64 postgresql18-ee-contrib-18.3-EE~demo.rhel9.cybertec2.x86_64 postgresql18-ee-libs-18.3-EE~demo.rhel9.cybertec2.x86_64 postgresql18-ee-server-18.3-EE~demo.rhel9.cybertec2.x86_64
Complete!
… and that’s it. As with the other posts in this little series, we’ll have a look at how to start the instance and enable TDE in the next post.
OGG-08048 after patching GoldenGate: explanations and solutions
When patching GoldenGate Classic Architecture, you might encounter an OGG-08048 error when restarting your extracts and replicats.
OGG-08048: Failed to initialize timezone information. Check location of ORACLE_HOME.
What should you do exactly, and how do you avoid this error in the future? In fact, this error is easy to reproduce, which also makes it easy to avoid. It usually happens when following the official documentation instructions for patching, which always include a modification of the ORACLE_HOME variable:
export ORACLE_HOME=GoldenGate_Installation_Path
Where does this come from?
This is a good practice to make sure the OPatch utility knows what to patch. However, it might cause issues when restarting. As a rule of thumb, the modified environment should only be used to patch and roll back your installation. You shouldn’t run any management tasks with it!
The most important thing to remember is that you should start the manager with the correct environment variables! If the manager is already started, you might still get the error when starting the extracts. This means that correcting your environment variables after restarting the manager might not solve the issue!
With this explanation, you now understand why rolling back the patch will not solve the OGG-08048 error. The rollback will work, but you will not be able to restart the extracts!
OGG-08048 error?
If you have an OGG-08048 error when starting GoldenGate processes:
- If your environment is patched, do not attempt to roll back. Just load your usual GoldenGate environment, restart the manager, and attempt to restart the processes.
- If you already rolled back the patch, you can apply it again. Then, follow the steps described above: load a standard GoldenGate environment, restart the manager and the GoldenGate processes.
And in the future, remember to always use your classic environment to manage your installation before and after applying a GoldenGate patch. To make it safer, I would suggest using separate sessions to avoid any confusion.
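One way to implement that separation, sketched here with hypothetical paths (adjust them to your own installation), is to run the patching commands in a subshell so the modified ORACLE_HOME never leaks into the session you use to manage GoldenGate:

```shell
# Hypothetical paths for illustration only; adjust to your installation.
OGG_HOME=/u01/app/oracle/product/19.1.0/oggcore_1
DB_HOME=/u01/app/oracle/product/19.0.0/dbhome_1

# Patch inside a subshell: the exported ORACLE_HOME dies with it.
(
  export ORACLE_HOME="$OGG_HOME"
  echo "patching with ORACLE_HOME=$ORACLE_HOME"
  # "$ORACLE_HOME"/OPatch/opatch apply   # run OPatch here, in this subshell only
)

# The parent shell keeps the database home for all management tasks.
export ORACLE_HOME="$DB_HOME"
echo "managing with ORACLE_HOME=$ORACLE_HOME"
```

This way, even if you forget to clean up after patching, the manager is always started from an environment that never saw the patching variables.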
And after patching your GoldenGate classic architecture setup, you should definitely consider upgrading to GoldenGate 26ai, using the migration utility.
The article OGG-08048 after patching GoldenGate: explanations and solutions first appeared on dbi Blog.
DBA technologies.
Get hashed password for old user
Dealing with 100000000+ rows of data.
Secure Application Role Disable Role
Locked user.
Oracle error -1843 "not a valid month"
memoptimize for read and triggers
Dctm – Upgrade from 23.4 to 25.4 fails with DM_SERVER_E_SOCKOPT
Since its release in Q4 2025, I have worked on several upgrades from 23.4 to 25.4. The first one I worked on was for a customer using our own custom Documentum images. If you have followed my posts for some time, you might recall a previous blog post where I discussed a Documentum 20.2 to 23.4 upgrade and some issues related to IPv6 being disabled (see this blog).
1. Environment details
The source Documentum 23.4 environment was configured exactly like the one from that previous blog post, with the fix to ensure the components could start and run on IPv4 only. Using the exact same source code, I simply rebuilt the image with the same OS version (RHEL8 for Dctm 23.4). One difference was that I was previously running on a vanilla Kubernetes cluster, while for this blog I used RKE2 (from SUSE), meaning the OS and Kubernetes cluster were not the same.
At the Documentum level, the Connection Broker was configured with “host=${Current-IPv4}” to force it to start with IPv4 only. The Repository had “ip_mode = V4ONLY” in its “server.ini” to achieve the same behavior. With these two configurations, the Documentum 23.4 environment was installed from scratch without any issues and was running properly. I performed several pod restarts over the next few days, and everything was running smoothly.
2. Upgrade to Documentum 25.4
Then it was time to upgrade the environment to 25.4. For that purpose, I built a new image on RHEL9 (switching from ubi8 to ubi9 for the base image). I had no problems with the preparation or the installation of the other binaries.
I then triggered the upgrade process, which completed successfully for the Connection Broker. However, when it reached the Repository part, things were not that simple. The Repository upgrade log file contained the following:
[dmadmin@cs-0 ~]$ cat $DM_HOME/install/logs/install.log
20:30:29,056 INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - Log is ready and is set to the level - INFO
20:30:29,060 INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product name is: UniversalServerConfigurator
20:30:29,060 INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - The product version is: 25.4.0000.0143
20:30:29,060 INFO [main] -
20:30:29,089 INFO [main] com.documentum.install.shared.installanywhere.actions.InitializeSharedLibrary - Done InitializeSharedLibrary ...
20:30:29,117 INFO [main] com.documentum.install.server.installanywhere.actions.DiWASilentCheckVaultStatus - Checking the vault status: Silent mode
20:30:29,117 INFO [main] com.documentum.install.server.installanywhere.actions.DiWASilentCheckVaultStatus - Checking whether vault enabled or not
20:30:29,121 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerInformation - Setting CONFIGURE_DOCBROKER value to TRUE for SERVER
20:30:29,121 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerInformation - Setting CONFIGURE_DOCBASE value to TRUE for SERVER
20:30:30,124 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckEnvrionmentVariable - The installer was started using the dm_launch_server_config_program.sh script.
20:30:30,124 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckEnvrionmentVariable - The installer will determine the value of environment variable DOCUMENTUM.
20:30:30,124 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckEnvrionmentVariable - existingVersion : 25.4serverMajorversion : 25.4
20:30:33,125 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckEnvrionmentVariable - The installer will determine the value of environment variable PATH.
20:30:33,125 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckEnvrionmentVariable - existingVersion : 25.4serverMajorversion : 25.4
20:30:36,126 INFO [main] - existingVersion : 25.4serverMajorversion : 25.4
20:30:36,136 INFO [main] com.documentum.install.server.installanywhere.actions.DiWASilentConfigurationInstallationValidation - Start to validate docbase parameters.
20:30:36,140 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerPatchExistingDocbaseAction - The installer will obtain all the DOCBASE on the machine.
20:30:38,146 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerDocAppFolder - The installer will obtain all the DocApps which could be installed for the repository.
20:30:38,148 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerLoadDocBaseComponentInfo - The installer will gather information about the component GR_REPO.
20:30:41,151 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerReadServerIniForLockbox - Lockbox disabled for the repository : GR_REPO.
20:30:41,153 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckKeepAEKUnchanged - vaule of isVaultEnabledinPrevious : null
20:30:41,156 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckKeystoreStatusForOld - The installer will check old AEK key status.
20:30:41,219 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckKeystoreStatus - Executed dm_crypto_create -check command and the return code - 1
20:30:41,221 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerLoadValidAEKs - AEK key type provided in properties is : Local
20:30:41,266 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckKeystoreStatus - Executed dm_crypto_create -check command and the return code - 1
20:30:41,269 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerEnableLockBoxValidation - The installer will validate AEK fields.
20:30:41,272 INFO [main] - Is Vault enabled :false
20:30:41,273 INFO [main] - Is Vault enabled :false
20:30:41,326 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerValidateLockboxPassphrase - Installer will boot AEK key
20:31:11,376 INFO [main] - Is Vault enabled :false
20:31:11,377 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckKeystoreStatus - The installer will check keystore status.
20:31:11,419 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckKeystoreStatus - Executed dm_crypto_create -check command and the return code - 1
20:31:11,419 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckKeystoreStatus - The installer detected the keystore already exists and was created using user password.
20:31:11,425 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerQueryDatabaseInformation - The installer is gathering database connection information from the local machine.
20:31:11,427 INFO [main] com.documentum.install.appserver.services.DiAppServerUtil - appServer == null
20:31:11,427 INFO [main] com.documentum.install.appserver.services.DiAppServerUtil - isTomcatInstalled == true -- tomcat version is null
20:31:11,431 INFO [main] com.documentum.install.appserver.tomcat.TomcatApplicationServer - setApplicationServer sharedDfcLibDir is:$DOCUMENTUM/dfc
20:31:11,431 INFO [main] com.documentum.install.appserver.tomcat.TomcatApplicationServer - getFileFromResource for templates/appserver.properties
20:31:11,434 INFO [main] com.documentum.install.appserver.tomcat.TomcatApplicationServer - Tomcat Home = $DOCUMENTUM/tomcat
20:31:11,438 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerModifyDocbaseDirectory - The installer will create the folder structure for repository GR_REPO.
20:31:11,440 INFO [main] - The installer will stop component process for GR_REPO.
20:31:11,479 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerUpdateTCSUnixServiceFile - The installer will check service entries for repository GR_REPO.
20:31:11,483 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerModifyDfcProperties - The installer will update dfc.properties file.
20:31:11,485 INFO [main] com.documentum.install.shared.common.services.dfc.DiDfcProperties - Installer is not adding connection broker information as it is already added.
20:31:13,488 INFO [main] - The installer will update server.ini file for the repository.
20:31:13,493 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerDataIniGenerator - The installer will create data_dictionary.ini for the repository.
20:31:13,496 INFO [main] - The installer will obtain database server name for database dctmdb.
20:31:13,497 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerLoadServerWebcacheInfo - The installer will obtain database information of dctmdb.
20:31:13,498 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerWebCacheIniGenerator - The installer will update webcache.ini file for the repository.
20:31:13,503 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerTestDatabaseConnection4Docbase_Upgrade - The installer is testing the database connection information
20:31:13,503 INFO [main] com.documentum.install.server.common.services.db.DiServerDbSvrOracleServer - The installer is validating the database version is supported.
20:31:13,638 INFO [main] com.documentum.install.server.common.services.db.DiServerDbSvrOracleServer - The installer is validating the database connection information in the server.ini file.
20:31:13,910 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckIfDMSUpgraded - Check if upgrade of DMS tables is needed for current docbase.
20:31:13,911 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerCheckIfDMSUpgraded - The current repository doesn't need to upgrade DMS table.
20:31:13,922 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerUpdateDocbaseServiceScript - The installer will update start script for repository GR_REPO.
20:31:13,929 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerUpdateDocbaseServiceScript - The installer will update stop script for repository GR_REPO.
20:31:13,937 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerUpgradeAEKUtility - will not execute start and stop services
20:31:13,944 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAServerUpgradeAEKUtility - will not execute start and stop services
20:31:13,947 INFO [main] - The installer will start component process for GR_REPO.
20:31:14,997 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAUnixServiceControl - logPath is $DOCUMENTUM/dba/log/GR_REPO.log
20:31:16,005 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAUnixServiceControl - logPath is $DOCUMENTUM/dba/log/GR_REPO.log
20:31:17,015 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAUnixServiceControl - logPath is $DOCUMENTUM/dba/log/GR_REPO.log
20:31:18,024 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAUnixServiceControl - logPath is $DOCUMENTUM/dba/log/GR_REPO.log
20:31:19,030 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAUnixServiceControl - logPath is $DOCUMENTUM/dba/log/GR_REPO.log
20:31:20,038 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAUnixServiceControl - logPath is $DOCUMENTUM/dba/log/GR_REPO.log
20:31:21,045 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAUnixServiceControl - logPath is $DOCUMENTUM/dba/log/GR_REPO.log
20:31:22,053 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAUnixServiceControl - logPath is $DOCUMENTUM/dba/log/GR_REPO.log
...
20:31:43,060 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAUnixServiceControl - logPath is $DOCUMENTUM/dba/log/GR_REPO.log
20:31:44,066 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAUnixServiceControl - logPath is $DOCUMENTUM/dba/log/GR_REPO.log
...
20:32:10,073 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAUnixServiceControl - logPath is $DOCUMENTUM/dba/log/GR_REPO.log
20:32:11,080 INFO [main] com.documentum.install.server.installanywhere.actions.DiWAUnixServiceControl - logPath is $DOCUMENTUM/dba/log/GR_REPO.log
[dmadmin@cs-0 ~]$
Everything looked fine at the beginning of the log file, at least until the start of the Repository process. After waiting a little less than one minute, there was still nothing happening, which was not normal. Therefore, I checked the OS processes and found none for the Repository “GR_REPO“:
[dmadmin@cs-0 ~]$ ps uxf | grep GR_REPO
dmadmin 2233289 0.0 0.0 3348 1824 pts/0 S+ 20:32 0:00 \_ grep --color=auto GR_REPO
[dmadmin@cs-0 ~]$
Therefore, something happened to the Repository process, which means that the upgrade process either failed or would remain stuck in that loop. When checking the Repository log file, I saw the following:
[dmadmin@cs-0 dba]$ cat $DOCUMENTUM/dba/log/GR_REPO.log
OpenText Documentum Content Server (version 25.4.0000.0143 Linux64.Oracle)
Copyright (c) 2025. OpenText Corporation
All rights reserved.
2026-02-05T20:31:15.702051 2235323[2235323] 0000000000000000 [DM_SERVER_I_START_SERVER]info: "Docbase GR_REPO attempting to open"
2026-02-05T20:31:15.802801 2235323[2235323] 0000000000000000 [DM_SERVER_I_START_KEY_STORAGE_MODE]info: "Docbase GR_REPO is using database for cryptographic key storage"
2026-02-05T20:31:15.802878 2235323[2235323] 0000000000000000 [DM_SERVER_I_START_SERVER]info: "Docbase GR_REPO process identity: user(dmadmin)"
2026-02-05T20:31:16.073327 2235323[2235323] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize Post Upgrade Processing."
2026-02-05T20:31:16.073843 2235323[2235323] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize Base Types."
2026-02-05T20:31:16.074476 2235323[2235323] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize dmRecovery."
2026-02-05T20:31:16.089286 2235323[2235323] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize dmACL."
...
2026-02-05T20:31:17.886348 2235323[2235323] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize Acs Config List."
2026-02-05T20:31:17.886487 2235323[2235323] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize dmLiteSysObject."
2026-02-05T20:31:17.886957 2235323[2235323] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize dmBatchManager."
2026-02-05T20:31:17.891417 2235323[2235323] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize Partition Scheme."
2026-02-05T20:31:17.891883 2235323[2235323] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize Critical Event Registry."
2026-02-05T20:31:17.891947 2235323[2235323] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize Transaction Tracking Event Registry."
2026-02-05T20:31:17.892014 2235323[2235323] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize Initialze External User Event Set."
2026-02-05T20:31:17.893580 2235323[2235323] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize Authentication Plugins."
2026-02-05T20:31:17.895070 2235323[2235323] 0000000000000000 [DM_SESSION_I_AUTH_PLUGIN_LOADED]info: "Loaded Authentication Plugin with code 'dm_krb' ($DOCUMENTUM/dba/auth/libkerberos.so)."
2026-02-05T20:31:17.895090 2235323[2235323] 0000000000000000 [DM_SESSION_I_AUTH_PLUGIN_LOAD_INIT]info: "Authentication plugin ( 'dm_krb' ) was disabled. This is expected if no keytab file(s) at location ($DOCUMENTUM/dba/auth/kerberos).Please refer the content server installation guide."
2026-02-05T20:31:17.896397 2235323[2235323] 0000000000000000 [DM_SERVER_I_START_SERVER]info: "Docbase GR_REPO opened"
2026-02-05T20:31:17.896463 2235323[2235323] 0000000000000000 [DM_SERVER_I_SERVER]info: "Setting exception handlers to catch all interrupts"
2026-02-05T20:31:17.896475 2235323[2235323] 0000000000000000 [DM_SERVER_I_START]info: "Starting server using service name: GR_REPO"
2026-02-05T20:31:17.933282 2235323[2235323] 0000000000000000 [DM_LICENSE_E_NO_LICENSE_CONFIG]error: "Could not find dm_otds_license_config object."
2026-02-05T20:31:17.998180 2235323[2235323] 0000000000000000 [DM_SERVER_I_LAUNCH_MTHDSVR]info: "Launching Method Server succeeded."
2026-02-05T20:31:17.999298 2235323[2235323] 0000000000000000 [DM_SERVER_E_SOCKOPT]error: "setsockopt failed for (SO_REUSEADDR (client)) with error (88)"
[dmadmin@cs-0 dba]$
The Repository log file was also quite clean. Everything at the beginning was as expected, until the very last line. The Repository was about to become available when suddenly a “DM_SERVER_E_SOCKOPT” error appeared and crashed the Repository process.
I tried to start the Repository again, but without success. It always ended up with the exact same error as the last line. It was my first time encountering that error “DM_SERVER_E_SOCKOPT“, so I checked the official documentation and the OpenText support website. But I had no luck there either; there was not a single reference to that error anywhere.
So, what should I do next? As usual, I just tried things that could make sense based on the information available.
The error refers to SOCKOPT, which I assumed meant socket options, so possibly something related to networking or communications. I checked the “setsockopt” documentation (see this man page), which indeed seemed related to networking protocols. I also reviewed the “SO_REUSEADDR” documentation (see this other man page), which apparently relates to address binding.
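As an additional data point, error code 88 on Linux is ENOTSOCK (“Socket operation on non-socket”), which suggests the file descriptor handed to setsockopt was not a socket at all. You can translate the number quickly from the shell (here via a Python one-liner):

```shell
# Map errno 88 to its symbolic name and message (Linux).
python3 -c 'import errno, os; print(errno.errorcode[88], "-", os.strerror(88))'
# prints: ENOTSOCK - Socket operation on non-socket
```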
5. Fix & Resolution
With that in mind, I remembered the IPv4 versus IPv6 issue I encountered back in 2024 and the blog post I wrote about it (linked earlier). Since there was no issue starting the Connection Broker with the “host=${Current-IPv4}” setting, I focused on the Repository configuration.
Therefore, I tried disabling what I had added for 23.4 to work. Specifically, I commented out the line “#ip_mode = V4ONLY” in “server.ini” so that it would return to the default value:
[dmadmin@cs-0 ~]$ grep ip_mode $DOCUMENTUM/dba/config/GR_REPO/server.ini
ip_mode = V4ONLY
[dmadmin@cs-0 ~]$
[dmadmin@cs-0 ~]$ sed -i 's/^\(ip_mode\)/#\1/' $DOCUMENTUM/dba/config/GR_REPO/server.ini
[dmadmin@cs-0 ~]$
[dmadmin@cs-0 ~]$ grep ip_mode $DOCUMENTUM/dba/config/GR_REPO/server.ini
#ip_mode = V4ONLY
[dmadmin@cs-0 ~]$
Then I tried starting the Repository again:
[dmadmin@cs-0 ~]$ $DOCUMENTUM/dba/dm_start_GR_REPO
starting Documentum server for repository: [GR_REPO]
with server log: [$DOCUMENTUM/dba/log/GR_REPO.log]
server pid: 2268601
[dmadmin@cs-0 ~]$
[dmadmin@cs-0 ~]$ cat $DOCUMENTUM/dba/log/GR_REPO.log
OpenText Documentum Content Server (version 25.4.0000.0143 Linux64.Oracle)
Copyright (c) 2025. OpenText Corporation
All rights reserved.
2026-02-05T20:57:34.417095 2268601[2268601] 0000000000000000 [DM_SERVER_I_START_SERVER]info: "Docbase GR_REPO attempting to open"
2026-02-05T20:57:34.517792 2268601[2268601] 0000000000000000 [DM_SERVER_I_START_KEY_STORAGE_MODE]info: "Docbase GR_REPO is using database for cryptographic key storage"
2026-02-05T20:57:34.517854 2268601[2268601] 0000000000000000 [DM_SERVER_I_START_SERVER]info: "Docbase GR_REPO process identity: user(dmadmin)"
2026-02-05T20:57:34.750986 2268601[2268601] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize Post Upgrade Processing."
2026-02-05T20:57:34.751390 2268601[2268601] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize Base Types."
2026-02-05T20:57:34.751810 2268601[2268601] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize dmRecovery."
2026-02-05T20:57:34.763115 2268601[2268601] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize dmACL."
...
2026-02-05T20:57:35.977776 2268601[2268601] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize Acs Config List."
2026-02-05T20:57:35.977873 2268601[2268601] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize dmLiteSysObject."
2026-02-05T20:57:35.978265 2268601[2268601] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize dmBatchManager."
2026-02-05T20:57:35.981015 2268601[2268601] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize Partition Scheme."
2026-02-05T20:57:35.981417 2268601[2268601] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize Critical Event Registry."
2026-02-05T20:57:35.981479 2268601[2268601] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize Transaction Tracking Event Registry."
2026-02-05T20:57:35.981543 2268601[2268601] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize Initialze External User Event Set."
2026-02-05T20:57:35.983046 2268601[2268601] 0000000000000000 [DM_SESSION_I_INIT_BEGIN]info: "Initialize Authentication Plugins."
2026-02-05T20:57:35.984583 2268601[2268601] 0000000000000000 [DM_SESSION_I_AUTH_PLUGIN_LOADED]info: "Loaded Authentication Plugin with code 'dm_krb' ($DOCUMENTUM/dba/auth/libkerberos.so)."
2026-02-05T20:57:35.984604 2268601[2268601] 0000000000000000 [DM_SESSION_I_AUTH_PLUGIN_LOAD_INIT]info: "Authentication plugin ( 'dm_krb' ) was disabled. This is expected if no keytab file(s) at location ($DOCUMENTUM/dba/auth/kerberos).Please refer the content server installation guide."
2026-02-05T20:57:35.986125 2268601[2268601] 0000000000000000 [DM_SERVER_I_START_SERVER]info: "Docbase GR_REPO opened"
2026-02-05T20:57:35.986189 2268601[2268601] 0000000000000000 [DM_SERVER_I_SERVER]info: "Setting exception handlers to catch all interrupts"
2026-02-05T20:57:35.986199 2268601[2268601] 0000000000000000 [DM_SERVER_I_START]info: "Starting server using service name: GR_REPO"
2026-02-05T20:57:36.016709 2268601[2268601] 0000000000000000 [DM_LICENSE_E_NO_LICENSE_CONFIG]error: "Could not find dm_otds_license_config object."
2026-02-05T20:57:36.067861 2268601[2268601] 0000000000000000 [DM_SERVER_I_LAUNCH_MTHDSVR]info: "Launching Method Server succeeded."
2026-02-05T20:57:36.073641 2268601[2268601] 0000000000000000 [DM_SERVER_I_LISTENING]info: "The server is listening on network address (Service Name: GR_REPO, Host Name: cs-0 :V4 IP)"
2026-02-05T20:57:36.082139 2268601[2268601] 0000000000000000 [DM_SERVER_I_LISTENING]info: "The server is listening on network address (Service Name: GR_REPO_s, Host Name: cs-0 :V4 IP)"
2026-02-05T20:57:37.135498 2268601[2268601] 0000000000000000 [DM_WORKFLOW_I_AGENT_START]info: "Workflow agent master (pid : 2268748, session 0101234580000007) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2026-02-05T20:57:37.136533 2268601[2268601] 0000000000000000 [DM_WORKFLOW_I_AGENT_START]info: "Workflow agent worker (pid : 2268749, session 010123458000000a) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2026-02-05T20:57:38.137780 2268601[2268601] 0000000000000000 [DM_WORKFLOW_I_AGENT_START]info: "Workflow agent worker (pid : 2268770, session 010123458000000b) is started sucessfully."
IsProcessAlive: Process ID 0 is not > 0
2026-02-05T20:57:39.139572 2268601[2268601] 0000000000000000 [DM_WORKFLOW_I_AGENT_START]info: "Workflow agent worker (pid : 2268823, session 010123458000000c) is started sucessfully."
2026-02-05T20:57:40.139741 2268601[2268601] 0000000000000000 [DM_SERVER_I_START]info: "Sending Initial Docbroker check-point "
2026-02-05T20:57:40.143121 2268601[2268601] 0000000000000000 [DM_MQ_I_DAEMON_START]info: "Message queue daemon (pid : 2268845, session 0101234580000456) is started sucessfully."
2026-02-05T20:57:42.758073 2268844[2268844] 0101234580000003 [DM_DOCBROKER_I_PROJECTING]info: "Sending information to Docbroker located on host (cs-0.cs.dctm-ns.svc.cluster.local) with port (1490). Information: (Config(GR_REPO), Proximity(1), Status(Open), Dormancy Status(Active))."
[dmadmin@cs-0 ~]$
As you can see, it is working again. This means that what I previously had to add for the 23.4 environment to work is now causing an issue on 25.4. Obviously the OS is not the same (both the container and the real host), and the Kubernetes environment is also different. Still, it is interesting that something fixing an issue in a specific version can introduce a problem in a newer version.
As shown in the logs, the Repository still starts on IPv4, as it clearly states: “The server is listening on network address (Service Name: GR_REPO, Host Name: cs-0 :V4 IP)“. However, it does not accept the setting “ip_mode = V4ONLY“, somehow.
That’s pretty incredible, don’t you think? Anyway, once the Repository processes were running, I was able to restart the upgrade process properly by ensuring that ip_mode was not specified for version 25.4.
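If you build a single image for both releases, the toggle can be made conditional on the target version. Here is a minimal sketch; the version variable and the sample file path are assumptions for the demo, not part of any Documentum tooling:

```shell
# Demo: comment out ip_mode when targeting 25.4+, keep it for 23.4.
DCTM_VERSION=25.4          # assumed variable holding the target release
SERVER_INI=./server.ini    # demo path; the real file lives under $DOCUMENTUM/dba/config/<repo>/

# Create a sample server.ini for the demo.
printf 'ip_mode = V4ONLY\n' > "$SERVER_INI"

case "$DCTM_VERSION" in
  25.*|26.*) sed -i 's/^\(ip_mode\)/#\1/' "$SERVER_INI" ;;
esac

grep ip_mode "$SERVER_INI"   # prints: #ip_mode = V4ONLY
```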
The article Dctm – Upgrade from 23.4 to 25.4 fails with DM_SERVER_E_SOCKOPT first appeared on dbi Blog.
How to patch your ODA to 19.30?
Patch 19.30 is now available for Oracle Database Appliance series. Let’s find out what’s new and how to apply this patch.
What’s new?
The real new feature is the possibility to roll back a server component patch done by an odacli update-servercomponents. Until now, in the rare cases where you would need to do a rollback, you could only rely on ODABR. And guess what? This new rollback feature makes use of ODABR. So keep doing manual ODABR snapshots prior to attempting to patch your ODA.
Regarding odacli update-dcscomponents, it now has a dedicated job engine you can query with a new command, odacli describe-admin-job, to see the progress. This is useful because this job now lasts longer; it’s always good to know the steps involved in such a process and their status.
This version is 19.30, meaning that the bare metal GI stack is still using 19c binaries. 23ai/26ai databases are still limited to DB Systems, meaning that bare metal databases are limited to 19c.
As you can guess, this patch is mainly a bundle patch for security and bug fixes. A bigger update is expected later this year.
Which ODA is compatible with this 19.30 release?
The latest ODAs X11-HA, X11-L and X11-S are supported, as well as the X10, X9-2 and X8-2 series. The X7-2 series and older ones are not supported anymore: if you own one of these older generations, you will not be able to patch it. If you’re using X8-2 ODAs, available from late 2019 to mid 2022, the last patch is planned for August 2027.
I still recommend keeping your ODA 7 years, not less, not more. This blog post is still relevant today: https://www.dbi-services.com/blog/why-you-should-consider-keeping-your-oda-more-than-5-years/.
Is this patch a cumulative one?
The rule is now well established: you can apply a patch on top of the four previous ones. 19.30 can then be applied on top of 19.29, 19.28, 19.27 and 19.26. Patching once a year will prevent having to apply 2 or more patches, which would mean a longer downtime.
What’s new since 19.29 is the additional monthly system patch. This is a special patch for the system only, for those who cannot wait for the next global patch to be released. The first one was 19.29.0.1.0, and 19.30 can be applied on top of it.
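The cumulative rule above can be sketched as simple minor-version arithmetic. This is purely illustrative (plain numbers, not real odacli output): a direct patch is possible when the source release is between one and four quarterly releases behind the target:

```shell
# Check whether a direct patch to 19.30 is supported (illustrative only).
target_minor=30   # 19.30
source_minor=26   # 19.26

gap=$((target_minor - source_minor))
if [ "$gap" -ge 1 ] && [ "$gap" -le 4 ]; then
  echo "direct patch from 19.$source_minor to 19.$target_minor is supported"
else
  echo "intermediate patches needed from 19.$source_minor to 19.$target_minor"
fi
# prints: direct patch from 19.26 to 19.30 is supported
```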
In my lab at dbi services, I will use an ODA X8-2M running 19.29 with one DB home, one database and one DB System. The DB System is already running a 26ai database. The given patching procedure should be the same if you come from 19.26 or later.
Is there also a patch for my databases?
Only 19c databases are supported on bare metal, and the patch for 19c is obviously 19.30. For a 26ai database running inside a DB System, you will patch from 23.26.0.0 to 23.26.1.0. This is how the new version numbering works now.
Download the patch and clone files
These files are mandatory:
38776074 => the patch itself
30403673 => the GI clone needed for deploying newer 19c GI version
30403662 => the DB clone for deploying newer version of 19c
These files are optional:
30403643 => ISO file for reimaging, not needed for patching
36524660 => System image for deploying a new 26ai DB System
36524627 => the GI clone needed for deploying/patching to newer 26ai GI version
36524642 => the DB clone for deploying/patching to newer 26ai version
32451228 => The newer system image for 19c DB Systems
38776071 => The patch for 26ai DB Systems
Be sure to choose the very latest 19.30 when downloading these files; the patch number is the same for older versions of the GI clones, DB clones and ISO files.
Prepare the patching
Before starting, please check these prerequisites:
- filesystems /, /opt, /u01 and /root have at least 20% of available free space
- unzip the downloaded patch files
- remove any additional manually installed RPMs
- revert the profile scripts to the default ones (for the grid and oracle users)
- make sure you’ve planned sufficient downtime (4+ hours depending on the number of databases and DB Systems)
- do a sanity reboot before patching to kill zombie processes
- use ODABR to make snapshots of the important filesystems prior to patching: this tool is now included in the software distribution
df -h / /u01 /opt/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolRoot 30G 13G 16G 46% /
/dev/mapper/VolGroupSys-LogVolU01 59G 23G 34G 40% /u01
/dev/mapper/VolGroupSys-LogVolOpt 69G 34G 33G 52% /opt
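That 20% free-space prerequisite is easy to script. This sketch walks the mount points listed above (skipping any that do not exist) and flags filesystems above 80% usage:

```shell
# Flag any of the key filesystems with less than 20% free space.
for fs in / /u01 /opt; do
  [ -d "$fs" ] || { echo "SKIP: $fs not present"; continue; }
  # df -P gives a portable one-line-per-filesystem output; $5 is the Use% column.
  used=$(df -P "$fs" | awk 'NR==2 { gsub("%", "", $5); print $5 }')
  if [ "$used" -gt 80 ]; then
    echo "WARNING: $fs is ${used}% used, free up space before patching"
  else
    echo "OK: $fs is ${used}% used"
  fi
done
```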
cd /opt/dbi
for a in `ls p*1930*.zip` ; do unzip -o $a ; rm -f $a ; done
reboot
...
pvs
PV VG Fmt Attr PSize PFree
/dev/md126p3 VolGroupSys lvm2 a-- 446.09g 260.09g
/opt/odabr/odabr backup --snap
INFO: 2026-02-26 11:29:54: Please check the logfile '/opt/odabr/out/log/odabr_24079.log' for more details
│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│
odabr - ODA node Backup Restore - Version: 2.0.2-05
Copyright 2013, 2025, Oracle and/or its affiliates.
--------------------------------------------------------
RACPack, Cloud Innovation and Solution Engineering Team
│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│
INFO: 2026-02-26 11:29:54: Checking superuser
INFO: 2026-02-26 11:29:54: Checking Bare Metal
INFO: 2026-02-26 11:29:54: Removing existing LVM snapshots
WARNING: 2026-02-26 11:29:54: LVM snapshot for 'opt' does not exist
WARNING: 2026-02-26 11:29:54: LVM snapshot for 'u01' does not exist
WARNING: 2026-02-26 11:29:54: LVM snapshot for 'root' does not exist
INFO: 2026-02-26 11:29:54: Checking current OS version...
INFO: 2026-02-26 11:29:54: Checking LVM restore backgroud process
INFO: 2026-02-26 11:29:54: Checking LVM size
INFO: 2026-02-26 11:29:54: Boot device backup
INFO: 2026-02-26 11:29:54: Getting EFI device
INFO: 2026-02-26 11:29:54: ...step1 - unmounting EFI
INFO: 2026-02-26 11:29:54: ...step2 - making efi device backup
SUCCESS: 2026-02-26 11:29:57: ...EFI device backup saved as '/opt/odabr/out/hbi/efi.img'
INFO: 2026-02-26 11:29:57: ...step3 - checking EFI device backup
INFO: 2026-02-26 11:29:57: Getting boot device
INFO: 2026-02-26 11:29:57: ...step1 - making boot device backup using tar
SUCCESS: 2026-02-26 11:30:03: ...boot content saved as '/opt/odabr/out/hbi/boot.tar.gz'
INFO: 2026-02-26 11:30:03: ...step2 - unmounting boot
INFO: 2026-02-26 11:30:04: ...step3 - making boot device backup using dd
SUCCESS: 2026-02-26 11:30:09: ...boot device backup saved as '/opt/odabr/out/hbi/boot.img'
INFO: 2026-02-26 11:30:09: ...step4 - mounting boot
INFO: 2026-02-26 11:30:09: ...step5 - mounting EFI
INFO: 2026-02-26 11:30:10: ...step6 - checking boot device backup
INFO: 2026-02-26 11:30:10: Making OCR physical backup
INFO: 2026-02-26 11:30:11: ...ocr backup saved as '/opt/odabr/out/hbi/ocrbackup_24079.bck'
SUCCESS: 2026-02-26 11:30:11: OCR physical backup created successfully
INFO: 2026-02-26 11:30:11: OCR export backup
INFO: 2026-02-26 11:30:12: ...ocr export saved as '/opt/odabr/out/hbi/ocrexport_24079.bck'
SUCCESS: 2026-02-26 11:30:12: OCR export backup created successfully
INFO: 2026-02-26 11:30:12: Saving clusterware patch level as '/opt/odabr/out/hbi/clusterware_patch_level.info'
SUCCESS: 2026-02-26 11:30:12: Clusterware patch level saved successfully
INFO: 2026-02-26 11:30:12: Making LVM snapshot backup
SUCCESS: 2026-02-26 11:30:13: ...snapshot backup for 'opt' created successfully
SUCCESS: 2026-02-26 11:30:14: ...snapshot backup for 'u01' created successfully
SUCCESS: 2026-02-26 11:30:14: ...snapshot backup for 'root' created successfully
SUCCESS: 2026-02-26 11:30:14: LVM snapshots backup done successfully
Version precheck
Start to check the current version of the various components:
odacli describe-component
System Version
--------------
19.29.0.0.0
System Node Name
----------------
dbioda01
Local System Version
--------------------
19.29.0.0.0
Component Installed Version Available Version
---------------------------------------- -------------------- --------------------
OAK 19.29.0.0.0 up-to-date
GI 19.29.0.0.251021 up-to-date
DB {
OraDB19000_home9 19.29.0.0.251021 up-to-date
[CPROD19]
}
DCSCONTROLLER 19.29.0.0.0 up-to-date
DCSCLI 19.29.0.0.0 up-to-date
DCSAGENT 19.29.0.0.0 up-to-date
DCSADMIN 19.29.0.0.0 up-to-date
OS 8.10 up-to-date
ILOM 5.1.5.22.r165351 up-to-date
BIOS 52160100 up-to-date
LOCAL CONTROLLER FIRMWARE {
[c4] 8000D9AB up-to-date
}
SHARED CONTROLLER FIRMWARE {
[c0, c1] VDV1RL06 up-to-date
}
LOCAL DISK FIRMWARE {
[c2d0, c2d1] XC311132 up-to-date
}
HMP 2.4.10.1.600 up-to-date
List the DB homes, databases, DB Systems and VMs:
odacli list-dbhomes
ID Name DB Version DB Edition Home Location Status
---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ----------
57c0dd7f-dcf4-4a38-9e79-4bf8c78e81bb OraDB19000_home9 19.29.0.0.251021 EE /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_9 CONFIGURED
odacli list-databases
ID DB Name DB Type DB Version CDB Class Edition Shape Storage Status DB Home ID
---------------------------------------- ---------- -------- -------------------- ------- -------- -------- -------- -------- ------------ ----------------------------------------
976a80f2-4653-469f-8cd4-ddc1a21aff51 CPROD19 SI 19.29.0.0.251021 true OLTP EE odb8 ASM CONFIGURED 57c0dd7f-dcf4-4a38-9e79-4bf8c78e81bb
odacli list-dbsystems
Name Shape GI version DB info Status Created Updated
-------------------- ---------- ------------------ ------------------------------ ---------------------- ------------------------ ------------------------
dbs-04-tst dbs2 23.26.0.0.0 23.26(CONFIGURED=1) CONFIGURED 2026-01-12 10:14:46 CET 2026-01-12 10:45:45 CET
odacli list-vms
No data found for resource VM.
Update the DCS components
After registering the patch file, updating the DCS components is the first step:
odacli update-repository -f /opt/dbi/oda-sm-19.30.0.0.0-260210-server.zip
sleep 30 ; odacli describe-job -i e3ba068f-01db-45c3-949d-b79f43c8d6b7
Job details
----------------------------------------------------------------
ID: e3ba068f-01db-45c3-949d-b79f43c8d6b7
Description: Repository Update
Status: Success
Created: February 26, 2026 11:30:48 CET
Message: /opt/dbi/oda-sm-19.30.0.0.0-260210-server.zip
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Unzip bundle February 26, 2026 11:30:49 CET February 26, 2026 11:31:10 CET Success
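The fixed `sleep` delays used throughout this post are only there for readability. In practice you can poll the job until it reaches a final status; here is a minimal sketch (the helper names are mine, and it relies on the `Status:` line visible in the `odacli describe-job` output above):

```shell
# Extract the value of the first "Status: ..." line from describe-job output.
job_status() {
  awk -F': *' '/^[[:space:]]*Status:/ { print $2; exit }'
}

# Poll a job every 30 seconds until it leaves the running state (sketch;
# the final status names are assumed to be Success/Failure).
wait_for_job() {
  local id=$1 status
  while : ; do
    status=$(odacli describe-job -i "$id" | job_status)
    case "$status" in
      Success|Failure) echo "$status" ; return ;;
    esac
    sleep 30
  done
}
```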
odacli describe-component
System Version
--------------
19.29.0.0.0
System Node Name
----------------
dbioda01
Local System Version
--------------------
19.29.0.0.0
Component Installed Version Available Version
---------------------------------------- -------------------- --------------------
OAK 19.29.0.0.0 19.30.0.0.0
GI 19.29.0.0.251021 19.30.0.0.260120
DB {
OraDB19000_home9 19.29.0.0.251021 19.30.0.0.260120
[CPROD19]
}
DCSCONTROLLER 19.29.0.0.0 19.30.0.0.0
DCSCLI 19.29.0.0.0 19.30.0.0.0
DCSAGENT 19.29.0.0.0 19.30.0.0.0
DCSADMIN 19.29.0.0.0 19.30.0.0.0
OS 8.10 up-to-date
ILOM 5.1.5.22.r165351 5.1.5.29.r167438
BIOS 52160100 52170100
LOCAL CONTROLLER FIRMWARE {
[c4] 8000D9AB up-to-date
}
SHARED CONTROLLER FIRMWARE {
[c0, c1] VDV1RL06 up-to-date
}
LOCAL DISK FIRMWARE {
[c2d0, c2d1] XC311132 up-to-date
}
HMP 2.4.10.1.600 up-to-date
Let’s update the DCS components to 19.30:
odacli update-dcsadmin -v 19.30.0.0.0
sleep 90 ; odacli describe-job -i "c3c278ad-89e6-4c7d-b9fd-27833a187e43"
Job details
----------------------------------------------------------------
ID: c3c278ad-89e6-4c7d-b9fd-27833a187e43
Description: DcsAdmin patching to 19.30.0.0.0
Status: Success
Created: February 26, 2026 11:32:11 CET
Message:
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Dcs-admin upgrade February 26, 2026 11:32:11 CET February 26, 2026 11:32:21 CET Success
Ping DCS Admin February 26, 2026 11:32:21 CET February 26, 2026 11:33:29 CET Success
sleep 30 ; odacli update-dcscomponents -v 19.30.0.0.0
sleep 300 ; odacli describe-admin-job -i 2aeda3f3-df4d-4f7c-a0ce-b57eeab0448b
Job details
----------------------------------------------------------------
ID: 2aeda3f3-df4d-4f7c-a0ce-b57eeab0448b
Description: Update-dcscomponents to 19.30.0.0.0
Status: Success
Created: February 26, 2026 11:34:51 CET
Message:
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Pre-checks for update DCS components February 26, 2026 11:34:57 CET February 26, 2026 11:35:05 CET Success
Update DCS components February 26, 2026 11:35:05 CET February 26, 2026 11:35:05 CET Success
Stop DCS-Agent February 26, 2026 11:35:05 CET February 26, 2026 11:35:05 CET Success
Update MySql February 26, 2026 11:35:05 CET February 26, 2026 11:35:42 CET Success
Apply metadata schema changes February 26, 2026 11:35:42 CET February 26, 2026 11:35:42 CET Success
Modify MySQL Metadata February 26, 2026 11:35:42 CET February 26, 2026 11:35:43 CET Success
Update DCS-Agent February 26, 2026 11:35:43 CET February 26, 2026 11:35:57 CET Success
Update DCS-Cli February 26, 2026 11:35:57 CET February 26, 2026 11:35:59 CET Success
Update DCS-Controller February 26, 2026 11:35:59 CET February 26, 2026 11:36:22 CET Success
Update AHF RPM February 26, 2026 11:36:22 CET February 26, 2026 11:38:41 CET Success
Reset Keystore password February 26, 2026 11:38:41 CET February 26, 2026 11:39:02 CET Success
Update HAMI February 26, 2026 11:39:02 CET February 26, 2026 11:39:54 CET Success
Remove old library files February 26, 2026 11:39:54 CET February 26, 2026 11:39:54 CET Success
Post DCS update actions February 26, 2026 11:39:54 CET February 26, 2026 11:39:54 CET Success
System patching
Let’s run the prepatch checks for the system:
odacli create-prepatchreport -sc -v 19.30.0.0.0
sleep 180 ; odacli describe-prepatchreport -i c06cd4d0-d30c-4063-a3c1-3b86db6625b0
Prepatch Report
------------------------------------------------------------------------
Job ID: c06cd4d0-d30c-4063-a3c1-3b86db6625b0
Description: Patch pre-checks for [OS, ILOM, ORACHKSERVER, SERVER] to 19.30.0.0.0
Status: SUCCESS
Created: February 26, 2026 11:40:17 AM CET
Result: All pre-checks succeeded
Node Name
---------------
dbioda01
Pre-Check Status Comments
------------------------------ -------- --------------------------------------
__OS__
Validate supported versions Success Validated minimum supported versions.
Validate patching tag Success Validated patching tag: 19.30.0.0.0.
Is patch location available Success Patch location is available.
Verify All OS patches Success No dependencies found for RPMs being
removed, updated and installed. Check
/opt/oracle/dcs/log/jobfiles/
dnfdryrunout_2026-02-26_11-40-
34.0688_236.log file for more details
Validate command execution Success Validated command execution
__ILOM__
Validate ILOM server reachable Success Successfully connected with ILOM
server using public IP and USB
interconnect
Validate supported versions Success Validated minimum supported versions.
Validate patching tag Success Validated patching tag: 19.30.0.0.0.
Is patch location available Success Patch location is available.
Checking Ilom patch Version Success Successfully verified the versions
Patch location validation Success Successfully validated location
Validate command execution Success Validated command execution
__ORACHK__
Running orachk Success Successfully ran Orachk
Validate command execution Success Validated command execution
__SERVER__
Validate local patching Success Successfully validated server local
patching
Validate all KVM ACFS Success All KVM ACFS resources are running
resources are running
Validate DB System VM states Success All DB System VMs states are expected
Validate DB System AFD state Success All DB Systems are on required
versions
Validate command execution Success Validated command execution
OK, let’s apply the system patch:
odacli update-servercomponents -v 19.30.0.0.0
...
The server will reboot at the end of the patching (it took 40 minutes on my X8-2M). Let’s then check the job:
odacli describe-job -i "8d5b4b63-3a92-46e4-b466-9fd46cdf8b3a"
Job details
----------------------------------------------------------------
ID: 8d5b4b63-3a92-46e4-b466-9fd46cdf8b3a
Description: Server Patching to 19.30.0.0.0
Status: Success
Created: February 26, 2026 11:42:47 CET
Message:
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Validating GI user metadata February 26, 2026 11:42:53 CET February 26, 2026 11:42:54 CET Success
Modify BM udev rules February 26, 2026 11:42:54 CET February 26, 2026 11:43:05 CET Success
Validate ILOM server reachable February 26, 2026 11:42:54 CET February 26, 2026 11:42:54 CET Success
Stop oakd February 26, 2026 11:43:05 CET February 26, 2026 11:43:08 CET Success
Creating local repository February 26, 2026 11:43:09 CET February 26, 2026 11:43:11 CET Success
OSPatchBaseRepo
Updating versionlock plugin February 26, 2026 11:43:11 CET February 26, 2026 11:43:14 CET Success
Applying OS Patches February 26, 2026 11:43:14 CET February 26, 2026 11:50:26 CET Success
Applying HMP Patches February 26, 2026 11:50:27 CET February 26, 2026 11:50:30 CET Success
Creating local repository HMPPatchRepo February 26, 2026 11:50:27 CET February 26, 2026 11:50:27 CET Success
Oda-hw-mgmt upgrade February 26, 2026 11:50:31 CET February 26, 2026 11:51:02 CET Success
Patch location validation February 26, 2026 11:50:31 CET February 26, 2026 11:50:31 CET Success
Setting SELinux mode February 26, 2026 11:50:31 CET February 26, 2026 11:50:31 CET Success
Installing SQLcl software February 26, 2026 11:51:02 CET February 26, 2026 11:51:06 CET Success
OSS Patching February 26, 2026 11:51:02 CET February 26, 2026 11:51:02 CET Success
Applying Firmware local Disk Patches February 26, 2026 11:51:06 CET February 26, 2026 11:51:10 CET Success
Applying Firmware local Controller Patch February 26, 2026 11:51:10 CET February 26, 2026 11:51:14 CET Success
Applying Firmware shared Controller February 26, 2026 11:51:15 CET February 26, 2026 11:51:19 CET Success
Patch
Checking Ilom patch Version February 26, 2026 11:51:19 CET February 26, 2026 11:51:19 CET Success
Patch location validation February 26, 2026 11:51:19 CET February 26, 2026 11:51:19 CET Success
Disabling IPMI v2 February 26, 2026 11:51:20 CET February 26, 2026 11:51:21 CET Success
Save password in Wallet February 26, 2026 11:51:20 CET February 26, 2026 11:51:20 CET Success
Apply Ilom patch February 26, 2026 11:51:21 CET February 26, 2026 12:02:07 CET Success
Copying Flash Bios to Temp location February 26, 2026 12:02:07 CET February 26, 2026 12:02:07 CET Success
Start oakd February 26, 2026 12:02:08 CET February 26, 2026 12:02:24 CET Success
Add SYSNAME in Env February 26, 2026 12:02:25 CET February 26, 2026 12:02:25 CET Success
Cleanup JRE Home February 26, 2026 12:02:25 CET February 26, 2026 12:02:25 CET Success
Starting the clusterware February 26, 2026 12:02:25 CET February 26, 2026 12:03:59 CET Success
Update lvm.conf file February 26, 2026 12:04:00 CET February 26, 2026 12:04:00 CET Success
Generating and saving BOM February 26, 2026 12:04:01 CET February 26, 2026 12:04:34 CET Success
Update System full patch version February 26, 2026 12:04:01 CET February 26, 2026 12:04:01 CET Success
Update System rebootless patch version February 26, 2026 12:04:01 CET February 26, 2026 12:04:01 CET Success
PreRebootNode Actions February 26, 2026 12:04:34 CET February 26, 2026 12:07:10 CET Success
Reboot Node February 26, 2026 12:07:10 CET February 26, 2026 12:18:17 CET Success
GI patching
Let’s register the patch file, and do the precheck for GI:
odacli update-repository -f /opt/dbi/odacli-dcs-19.30.0.0.0-260210-GI-19.30.0.0.zip
sleep 70 ; odacli describe-job -i "5699a201-8f50-499c-98a8-18b2e79ca356"
Job details
----------------------------------------------------------------
ID: 5699a201-8f50-499c-98a8-18b2e79ca356
Description: Repository Update
Status: Success
Created: February 26, 2026 12:32:04 CET
Message: /opt/dbi/odacli-dcs-19.30.0.0.0-260210-GI-19.30.0.0.zip
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Unzip bundle February 26, 2026 12:32:05 CET February 26, 2026 12:33:14 CET Success
odacli create-prepatchreport -gi -v 19.30.0.0.0
sleep 180 ; odacli describe-prepatchreport -i ac157412-cfe2-4b2e-ae04-e7082cd4014f
Prepatch Report
------------------------------------------------------------------------
Job ID: ac157412-cfe2-4b2e-ae04-e7082cd4014f
Description: Patch pre-checks for [RHPGI, GI] to 19.30.0.0.0
Status: SUCCESS
Created: February 26, 2026 12:34:00 PM CET
Result: All pre-checks succeeded
Node Name
---------------
dbioda01
Pre-Check Status Comments
------------------------------ -------- --------------------------------------
__RHPGI__
Validate available space Success Validated free space under /u01
Evaluate GI patching Success Successfully validated GI patching
Validate command execution Success Validated command execution
__GI__
Validate GI metadata Success Successfully validated GI metadata
Validate supported GI versions Success Successfully validated minimum version
Is clusterware running Success Clusterware is running
Validate patching tag Success Validated patching tag: 19.30.0.0.0.
Is system provisioned Success Verified system is provisioned
Validate ASM is online Success ASM is online
Validate kernel log level Success Successfully validated the OS log
level
Validate Central Inventory Success oraInventory validation passed
Validate patching locks Success Validated patching locks
Validate clones location exist Success Validated clones location
Validate DB start dependencies Success DBs START dependency check passed
Validate DB stop dependencies Success DBs STOP dependency check passed
Validate space for clones Success Clones volume is already created
volume
Validate command execution Success Validated command execution
Let’s apply the GI update now:
odacli update-gihome -v 19.30.0.0.0
sleep 500 ; odacli describe-job -i "857bb637-82ec-4de9-a820-5ab9b895e9f8"
Job details
----------------------------------------------------------------
ID: 857bb637-82ec-4de9-a820-5ab9b895e9f8
Description: Patch GI with RHP to 19.30.0.0.0
Status: Success
Created: February 26, 2026 12:38:29 CET
Message:
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Patch GI with RHP to 19.30.0.0.0 February 26, 2026 12:38:44 CET February 26, 2026 12:46:00 CET Success
Registering image February 26, 2026 12:38:45 CET February 26, 2026 12:38:45 CET Success
Registering working copy February 26, 2026 12:38:45 CET February 26, 2026 12:38:46 CET Success
Starting the clusterware February 26, 2026 12:38:45 CET February 26, 2026 12:38:45 CET Success
Creating GI home directories February 26, 2026 12:38:46 CET February 26, 2026 12:38:46 CET Success
Extract GI clone February 26, 2026 12:38:46 CET February 26, 2026 12:38:46 CET Success
Provisioning Software Only GI with RHP February 26, 2026 12:38:46 CET February 26, 2026 12:38:46 CET Success
Registering image February 26, 2026 12:38:46 CET February 26, 2026 12:38:46 CET Success
Patch GI with RHP February 26, 2026 12:39:20 CET February 26, 2026 12:45:25 CET Success
Set CRS ping target February 26, 2026 12:45:25 CET February 26, 2026 12:45:25 CET Success
Updating .bashrc February 26, 2026 12:45:25 CET February 26, 2026 12:45:26 CET Success
Updating GI home metadata February 26, 2026 12:45:26 CET February 26, 2026 12:45:26 CET Success
Updating GI home version February 26, 2026 12:45:26 CET February 26, 2026 12:45:31 CET Success
Updating All DBHome version February 26, 2026 12:45:31 CET February 26, 2026 12:45:36 CET Success
Starting the clusterware February 26, 2026 12:45:56 CET February 26, 2026 12:45:56 CET Success
Validate ACFS resources are running February 26, 2026 12:45:56 CET February 26, 2026 12:45:57 CET Success
Validate GI availability February 26, 2026 12:45:56 CET February 26, 2026 12:45:56 CET Success
Validate DB System VMs states February 26, 2026 12:45:57 CET February 26, 2026 12:45:58 CET Success
Patch CPU Pools distribution February 26, 2026 12:45:58 CET February 26, 2026 12:45:58 CET Success
Patch DB System domain config February 26, 2026 12:45:58 CET February 26, 2026 12:45:58 CET Success
Patch KVM CRS type February 26, 2026 12:45:58 CET February 26, 2026 12:45:58 CET Success
Patch VM vDisks CRS dependencies February 26, 2026 12:45:58 CET February 26, 2026 12:45:58 CET Success
Save custom VNetworks to storage February 26, 2026 12:45:58 CET February 26, 2026 12:45:59 CET Success
Add network filters to DB Systems February 26, 2026 12:45:59 CET February 26, 2026 12:46:00 CET Success
Create network filters February 26, 2026 12:45:59 CET February 26, 2026 12:45:59 CET Success
Patch DB Systems custom scale metadata February 26, 2026 12:46:00 CET February 26, 2026 12:46:00 CET Success
Patch DB Systems vDisks CRS dependencies February 26, 2026 12:46:00 CET February 26, 2026 12:46:00 CET Success
No reboot is needed for this patch.
Check the versions
odacli describe-component
System Version
--------------
19.30.0.0.0
System Node Name
----------------
dbioda01
Local System Version
--------------------
19.30.0.0.0
Component Installed Version Available Version
---------------------------------------- -------------------- --------------------
OAK 19.30.0.0.0 up-to-date
GI 19.30.0.0.260120 up-to-date
DB {
OraDB19000_home9 19.29.0.0.251021 19.30.0.0.260120
[CPROD19]
}
DCSCONTROLLER 19.30.0.0.0 up-to-date
DCSCLI 19.30.0.0.0 up-to-date
DCSAGENT 19.30.0.0.0 up-to-date
DCSADMIN 19.30.0.0.0 up-to-date
OS 8.10 up-to-date
ILOM 5.1.5.29.r167438 up-to-date
BIOS 52170100 up-to-date
LOCAL CONTROLLER FIRMWARE {
[c4] 8000D9AB up-to-date
}
SHARED CONTROLLER FIRMWARE {
[c0, c1] VDV1RL06 up-to-date
}
LOCAL DISK FIRMWARE {
[c2d0, c2d1] XC311132 up-to-date
}
HMP 2.4.10.1.600 up-to-date
Patching the storage
Patching the storage is only needed if describe-component tells you that you’re not up-to-date. On my X8-2M, it wasn’t needed. If your ODA needs the storage patch, it’s easy:
odacli update-storage -v 19.30.0.0.0
odacli describe-job -i ...
The server will reboot once done.
Patching the DB homes
It’s now time to patch the DB home and the database on my ODA. Let’s first unzip and register the patch file in the repository:
odacli update-repository -f /opt/dbi/odacli-dcs-19.30.0.0.0-260210-DB-19.30.0.0.zip
sleep 60; odacli describe-job -i 5690c811-9030-427e-82f1-caeeba236329
Job details
----------------------------------------------------------------
ID: 5690c811-9030-427e-82f1-caeeba236329
Description: Repository Update
Status: Success
Created: February 26, 2026 12:54:57 CET
Message: /opt/dbi/odacli-dcs-19.30.0.0.0-260210-DB-19.30.0.0.zip
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Unzip bundle February 26, 2026 12:54:57 CET February 26, 2026 12:55:50 CET Success
odacli list-dbhomes
ID Name DB Version DB Edition Home Location Status
---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ----------
57c0dd7f-dcf4-4a38-9e79-4bf8c78e81bb OraDB19000_home9 19.29.0.0.251021 EE /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_9 CONFIGURED
Let’s check if the patch can be applied:
odacli create-prepatchreport -d -i 57c0dd7f-dcf4-4a38-9e79-4bf8c78e81bb -v 19.30.0.0.0
sleep 600; odacli describe-prepatchreport -i 3522c79c-7444-44d7-9422-9d1daab161d2
Prepatch Report
------------------------------------------------------------------------
Job ID: 3522c79c-7444-44d7-9422-9d1daab161d2
Description: Patch pre-checks for [DB, RHPDB, ORACHKDB] to 19.30.0.0.0: DbHome is OraDB19000_home9
Status: FAILED
Created: February 26, 2026 12:56:59 PM CET
Result: One or more pre-checks failed for [ORACHK, DB]
Node Name
---------------
dbioda01
Pre-Check Status Comments
------------------------------ -------- --------------------------------------
__DB__
Validate data corruption in Failed DCS-10315 - Patch described in My
patching Oracle Support Note KB867473 must be
applied.
Validate DB Home ID Success Validated DB Home ID:
57c0dd7f-dcf4-4a38-9e79-4bf8c78e81bb
Validate patching tag Success Validated patching tag: 19.30.0.0.0.
Is system provisioned Success Verified system is provisioned
Validate minimum agent version Success Validated minimum agent version
Is GI upgraded Success Validated GI is upgraded
Validate available space for Success Validated free space required under
db /u01
Validate there is usable Success Successfully validated Oracle Base
space under oracle base usable space
Validate glogin.sql file Success Successfully verified glogin.sql
won't break patching
Validate dbHomesOnACFS Success User has configured disk group for
configured Database homes on ACFS
Validate Oracle base Success Successfully validated Oracle Base
Is DB clone available Success Successfully validated clone file
exists
Validate command execution Success Validated command execution
__RHPDB__
Evaluate DBHome patching with Success Successfully validated updating
RHP dbhome with RHP. and local patching
is possible
Validate command execution Success Validated command execution
__ORACHK__
Running orachk Failed DCS-10702 - ORAchk validation failed:
.
Validate command execution Success Validated command execution
Verify the Fast Recovery Area Failed AHF-2929: FRA space management
(FRA) has reclaimable space problem file types are present
without an RMAN backup completion
within the last 7 days
I need to fix two problems. The first one is a bug that appeared in 19.29; let’s download and unzip the patch. Be careful: this patch is available for multiple versions, and you will need the one matching the version you are currently running (19.29 in my case).
su - oracle
unzip -d /home/oracle /opt/dbi/p38854064_1929000DBRU_Linux-x86-64.zip
Archive: /opt/dbi/p38854064_1929000DBRU_Linux-x86-64.zip
creating: /home/oracle/38854064/
creating: /home/oracle/38854064/files/
creating: /home/oracle/38854064/files/lib/
creating: /home/oracle/38854064/files/lib/libserver19.a/
inflating: /home/oracle/38854064/files/lib/libserver19.a/kjfc.o
inflating: /home/oracle/38854064/README.txt
creating: /home/oracle/38854064/etc/
creating: /home/oracle/38854064/etc/config/
inflating: /home/oracle/38854064/etc/config/inventory.xml
inflating: /home/oracle/38854064/etc/config/actions.xml
inflating: /home/oracle/PatchSearch.xml
Let’s stop the database and apply this patch:
. oraenv <<< CPROD19
srvctl stop database -db CPROD19_S1
cd 38854064
$ORACLE_HOME/OPatch/opatch apply
Oracle Interim Patch Installer version 12.2.0.1.47
Copyright (c) 2026, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_9
Central Inventory : /u01/app/oraInventory
from : /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_9/oraInst.loc
OPatch version : 12.2.0.1.47
OUI version : 12.2.0.7.0
Log file location : /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_9/cfgtoollogs/opatch/opatch2026-02-26_14-29-22PM_1.log
Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 38854064
Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_9')
Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '38854064' to OH '/u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_9'
Patching component oracle.rdbms, 19.0.0.0.0...
Patch 38854064 successfully applied.
Log file location: /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_9/cfgtoollogs/opatch/opatch2026-02-26_14-29-22PM_1.log
OPatch succeeded.
srvctl start database -db CPROD19_S1
The second problem is due to my database not having a proper backup strategy; let’s remove the unneeded archivelogs:
rman target /
delete force noprompt archivelog all;
exit;
exit
Now let’s retry the precheck:
odacli create-prepatchreport -d -i 57c0dd7f-dcf4-4a38-9e79-4bf8c78e81bb -v 19.30.0.0.0
odacli describe-prepatchreport -i b932011d-cfcf-402d-901e-5c7eac888f1f
Prepatch Report
------------------------------------------------------------------------
Job ID: b932011d-cfcf-402d-901e-5c7eac888f1f
Description: Patch pre-checks for [DB, RHPDB, ORACHKDB] to 19.30.0.0.0: DbHome is OraDB19000_home9
Status: FAILED
Created: February 26, 2026 3:01:14 PM CET
Result: One or more pre-checks failed for [ORACHK, DB]
Node Name
---------------
dbioda01
Pre-Check Status Comments
------------------------------ -------- --------------------------------------
__DB__
Validate data corruption in Failed DCS-10315 - Patch described in My
patching Oracle Support Note KB867473 must be
applied.
Validate DB Home ID Success Validated DB Home ID:
57c0dd7f-dcf4-4a38-9e79-4bf8c78e81bb
Validate patching tag Success Validated patching tag: 19.30.0.0.0.
Is system provisioned Success Verified system is provisioned
Validate minimum agent version Success Validated minimum agent version
Is GI upgraded Success Validated GI is upgraded
Validate available space for Success Validated free space required under
db /u01
Validate there is usable Success Successfully validated Oracle Base
space under oracle base usable space
Validate glogin.sql file Success Successfully verified glogin.sql
won't break patching
Validate dbHomesOnACFS Success User has configured disk group for
configured Database homes on ACFS
Validate Oracle base Success Successfully validated Oracle Base
Is DB clone available Success Successfully validated clone file
exists
Validate command execution Success Validated command execution
__RHPDB__
Evaluate DBHome patching with Success Successfully validated updating
RHP dbhome with RHP. and local patching
is possible
Validate command execution Success Validated command execution
__ORACHK__
Running orachk Failed DCS-10702 - ORAchk validation failed:
.
Validate command execution Success Validated command execution
Verify the Fast Recovery Area Failed AHF-2929: FRA space management
(FRA) has reclaimable space problem file types are present
without an RMAN backup completion
within the last 7 days
This report doesn’t seem to reflect the fix, and the ODA documentation tells us that in this case the DB home update will need to be forced; let’s do that:
odacli update-dbhome -i 57c0dd7f-dcf4-4a38-9e79-4bf8c78e81bb -v 19.30.0.0.0 --force
sleep 600; odacli describe-job -i "bd511055-7a35-45b4-b9f2-3a003c7ecb31"
Job details
----------------------------------------------------------------
ID: bd511055-7a35-45b4-b9f2-3a003c7ecb31
Description: DB Home Patching to 19.30.0.0.0: Home ID is 57c0dd7f-dcf4-4a38-9e79-4bf8c78e81bb
Status: Success
Created: February 26, 2026 15:08:46 CET
Message:
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Creating wallet for DB Client February 26, 2026 15:09:33 CET February 26, 2026 15:09:33 CET Success
Patch databases by RHP - [CPROD19] February 26, 2026 15:09:33 CET February 26, 2026 15:16:44 CET Success
Updating database metadata February 26, 2026 15:16:44 CET February 26, 2026 15:16:44 CET Success
Upgrade pwfile to 12.2 February 26, 2026 15:16:44 CET February 26, 2026 15:16:47 CET Success
Set log_archive_dest for Database February 26, 2026 15:16:47 CET February 26, 2026 15:16:50 CET Success
Populate PDB metadata February 26, 2026 15:16:51 CET February 26, 2026 15:16:52 CET Success
Generating and saving BOM February 26, 2026 15:16:52 CET February 26, 2026 15:17:33 CET Success
TDE parameter update February 26, 2026 15:18:06 CET February 26, 2026 15:18:06 CET Success
Everything is now OK.
Let’s check the DB homes and databases:
odacli list-dbhomes
ID Name DB Version DB Edition Home Location Status
---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ----------
d3b5fa9c-ad85-46c3-b11a-cd264978b653 OraDB19000_home10 19.30.0.0.260120 EE /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_10 CONFIGURED
57c0dd7f-dcf4-4a38-9e79-4bf8c78e81bb OraDB19000_home9 19.29.0.0.251021 EE /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_9 CONFIGURED
odacli list-databases
ID DB Name DB Type DB Version CDB Class Edition Shape Storage Status DB Home ID
---------------------------------------- ---------- -------- -------------------- ------- -------- -------- -------- -------- ------------ ----------------------------------------
976a80f2-4653-469f-8cd4-ddc1a21aff51 CPROD19 SI 19.30.0.0.260120 true OLTP EE odb8 ASM CONFIGURED d3b5fa9c-ad85-46c3-b11a-cd264978b653
Let’s now remove the old DB home. Note that DB homes are not protected by ODABR, so I would recommend taking a backup before removing an old one:
tar czf /backup/`hostname -s`_dbhome_9.tgz /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_9
odacli delete-dbhome -i 57c0dd7f-dcf4-4a38-9e79-4bf8c78e81bb
sleep 40 ; odacli describe-job -i 4589a4d7-6986-4e16-818c-78d585f44443
Job details
----------------------------------------------------------------
ID: 4589a4d7-6986-4e16-818c-78d585f44443
Description: Database Home OraDB19000_home9 Deletion with ID 57c0dd7f-dcf4-4a38-9e79-4bf8c78e81bb
Status: Success
Created: February 26, 2026 15:24:44 CET
Message:
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Setting up SSH equivalence February 26, 2026 15:24:44 CET February 26, 2026 15:24:44 CET Success
Setting up SSH equivalence February 26, 2026 15:24:44 CET February 26, 2026 15:24:44 CET Success
Validate DB Home February 26, 2026 15:24:44 CET February 26, 2026 15:24:44 CET Success
57c0dd7f-dcf4-4a38-9e79-4bf8c78e81bb
for deletion
Deleting DB Home by RHP February 26, 2026 15:24:45 CET February 26, 2026 15:25:21 CET Success
Clean up the old patches
Let’s remove the previous patch from the repository:
odacli cleanup-patchrepo -comp all -v 19.29.0.0.0
odacli describe-job -i "a9e29414-8f12-4b55-a6d4-9ad82e9a4c74"
Job details
----------------------------------------------------------------
ID: a9e29414-8f12-4b55-a6d4-9ad82e9a4c74
Description: Cleanup patchrepos
Status: Success
Created: February 26, 2026 15:29:46 CET
Message:
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Cleanup Repository February 26, 2026 15:29:46 CET February 26, 2026 15:29:47 CET Success
Cleanup old ASR rpm February 26, 2026 15:29:47 CET February 26, 2026 15:29:47 CET Success
Old GI binaries still use space in /u01; it’s better to remove them manually:
du -hs /u01/app/19.*
14G /u01/app/19.29.0.0
14G /u01/app/19.30.0.0
rm -rf /u01/app/19.29.0.0
I would recommend rebooting to check that everything runs fine, but let’s first check the components:
odacli describe-component
System Version
--------------
19.30.0.0.0
System Node Name
----------------
dbioda01
Local System Version
--------------------
19.30.0.0.0
Component Installed Version Available Version
---------------------------------------- -------------------- --------------------
OAK 19.30.0.0.0 up-to-date
GI 19.30.0.0.260120 up-to-date
DB {
OraDB19000_home10 19.30.0.0.260120 up-to-date
[CPROD19]
}
DCSCONTROLLER 19.30.0.0.0 up-to-date
DCSCLI 19.30.0.0.0 up-to-date
DCSAGENT 19.30.0.0.0 up-to-date
DCSADMIN 19.30.0.0.0 up-to-date
OS 8.10 up-to-date
ILOM 5.1.5.29.r167438 up-to-date
BIOS 52170100 up-to-date
LOCAL CONTROLLER FIRMWARE {
[c4] 8000D9AB up-to-date
}
SHARED CONTROLLER FIRMWARE {
[c0, c1] VDV1RL06 up-to-date
}
LOCAL DISK FIRMWARE {
[c2d0, c2d1] XC311132 up-to-date
}
HMP 2.4.10.1.600 up-to-date
reboot
...
ps -ef | grep pmon
grid 8292 1 0 15:37 ? 00:00:00 asm_pmon_+ASM1
grid 11539 1 0 15:37 ? 00:00:00 apx_pmon_+APX1
oracle 20494 1 0 15:38 ? 00:00:00 ora_pmon_CPROD19
root 23559 23363 0 15:39 pts/1 00:00:00 grep --color=auto pmon
Everything is fine.
Post-patching tasks
Don’t forget these post-patching tasks:
- remove the ODABR snapshots
- add your additional RPMs
- put back your profile scripts for grid and oracle users
- check if monitoring still works
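Apart from the snapshot removal, which is shown with odabr just below, the remaining checklist items can be scripted. The sketch below is a minimal example; the backup paths, package name, and service name are assumptions to adapt to your environment:

```shell
#!/bin/bash
# Post-patching checklist sketch -- the paths, the RPM name and the
# monitoring service name are hypothetical examples.

# Reinstall additional RPMs that the patch may have removed, e.g.:
# dnf install -y screen

# Put back the profile scripts for the grid and oracle users,
# assuming you saved copies before patching:
restore_profile() {
  local user=$1 backup=$2
  if [ -f "$backup" ]; then
    cp "$backup" "/home/$user/.bash_profile"
    chown "$user": "/home/$user/.bash_profile" 2>/dev/null
    echo "restored profile for $user"
  else
    echo "no saved profile found for $user"
  fi
}
restore_profile grid /backup/grid_bash_profile
restore_profile oracle /backup/oracle_bash_profile

# Check that the monitoring agent is still running (example service name):
# systemctl is-active oda-monitoring-agent
```

Keeping such a script next to your patching runbook makes it easy to apply the same post-patching steps consistently on every node.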
/opt/odabr/odabr infosnap
│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│
odabr - ODA node Backup Restore - Version: 2.0.2-06
Copyright 2013, 2025, Oracle and/or its affiliates.
--------------------------------------------------------
RACPack, Cloud Innovation and Solution Engineering Team
│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│
LVM snap name status COW Size Data%
------------- ---------- ---------- ------
root_snap active 30.00 GiB 6.08%
opt_snap active 70.00 GiB 11.41%
u01_snap active 60.00 GiB 25.30%
/opt/odabr/odabr delsnap
INFO: 2026-02-26 15:39:47: Please check the logfile '/opt/odabr/out/log/odabr_23962.log' for more details
│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│
odabr - ODA node Backup Restore - Version: 2.0.2-06
Copyright 2013, 2025, Oracle and/or its affiliates.
--------------------------------------------------------
RACPack, Cloud Innovation and Solution Engineering Team
│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│
INFO: 2026-02-26 15:39:47: Removing LVM snapshots
INFO: 2026-02-26 15:39:47: ...removing LVM snapshot for 'opt'
SUCCESS: 2026-02-26 15:39:48: ...snapshot for 'opt' removed successfully
INFO: 2026-02-26 15:39:48: ...removing LVM snapshot for 'u01'
SUCCESS: 2026-02-26 15:39:48: ...snapshot for 'u01' removed successfully
INFO: 2026-02-26 15:39:48: ...removing LVM snapshot for 'root'
SUCCESS: 2026-02-26 15:39:48: ...snapshot for 'root' removed successfully
SUCCESS: 2026-02-26 15:39:48: LVM snapshots removed successfully
Patching the DB System
If you use DB Systems on your ODA, meaning that some of your databases are running in dedicated VMs, you will need to apply the patch inside each DB System. If you’re using 26ai, you first need to register the new clones in the repository before connecting to your DB System:
odacli update-repository -f /opt/dbi/odacli-dcs-23.26.1.0.0-260211-GI-23.26.1.0.zip
sleep 30 ; odacli describe-job -i 8612ef6a-7df4-419d-8d05-176e11126f48
Job details
----------------------------------------------------------------
ID: 8612ef6a-7df4-419d-8d05-176e11126f48
Description: Repository Update
Status: Success
Created: February 26, 2026 15:44:00 CET
Message: /opt/dbi/odacli-dcs-23.26.1.0.0-260211-GI-23.26.1.0.zip
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Unzip bundle February 26, 2026 15:44:00 CET February 26, 2026 15:44:13 CET Success
odacli update-repository -f /opt/dbi/odacli-dcs-23.26.1.0.0-260211-DB-23.26.1.0.zip
sleep 30 ; odacli describe-job -i 9dd624f2-9048-4897-b63b-400b955c803c
Job details
----------------------------------------------------------------
ID: 9dd624f2-9048-4897-b63b-400b955c803c
Description: Repository Update
Status: Success
Created: February 26, 2026 15:45:12 CET
Message: /opt/dbi/odacli-dcs-23.26.1.0.0-260211-DB-23.26.1.0.zip
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Unzip bundle February 26, 2026 15:45:12 CET February 26, 2026 15:45:40 CET Success
odacli update-repository -f /opt/dbi/oda-sm-23.26.1.0.0-260211-server.zip
sleep 20 ; odacli describe-job -i 6a24023f-de2e-4481-b9c4-d8511d54be48
Job details
----------------------------------------------------------------
ID: 6a24023f-de2e-4481-b9c4-d8511d54be48
Description: Repository Update
Status: Success
Created: February 26, 2026 15:59:07 CET
Message: /opt/dbi/oda-sm-23.26.1.0.0-260211-server.zip
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Unzip bundle February 26, 2026 15:59:08 CET February 26, 2026 15:59:30 CET Success
odacli list-availablepatches
-------------------- ------------------------- ------------------------- ------------------------------
ODA Release Version Supported DB Versions Available DB Versions Supported Platforms
-------------------- ------------------------- ------------------------- ------------------------------
19.30.0.0.0 23.26.1.0.0 23.26.1.0.0 DB System
21.8.0.0.221018 Clone not available DB System
19.30.0.0.260120 19.30.0.0.260120 DB System, Bare Metal
Applying the patch works the same way as on bare metal, but here you need to specify version 23.26.1.0.0:
ssh dbs-04-tst
odacli update-dcsadmin -v 23.26.1.0.0
sleep 60 ; odacli describe-job -i 4b83ab57-ccb1-4f9f-8c70-572b45ada49b
Job details
----------------------------------------------------------------
ID: 4b83ab57-ccb1-4f9f-8c70-572b45ada49b
Description: DcsAdmin patching to 23.26.1.0.0
Status: Success
Created: March 05, 2026 10:07:44 CET
Message:
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Dcs-admin upgrade March 05, 2026 10:07:45 CET March 05, 2026 10:07:59 CET Success
Ping DCS Admin March 05, 2026 10:07:59 CET March 05, 2026 10:09:07 CET Success
sleep 30 ; odacli update-dcscomponents -v 23.26.1.0.0
{
"jobId" : "cb674b3e-d6eb-4351-be39-0f19b8c56f9d",
"status" : "Success",
"message" : "Update-dcscomponents is successful on all the node(s): DCS-Agent shutdown is successful. MySQL upgrade is successful. Metadata schema update is done. Script '/opt/oracle/dcs/log/jobfiles/cb674b3e-d6eb-4351-be39-0f19b8c56f9d/apply_metadata_change.sh' ran successfully. dcsagent RPM upgrade is successful. dcscli RPM upgrade is successful. dcscontroller RPM upgrade is successful. ahf RPM upgrade is successful. Successfully reset the Keystore password. HAMI RPM is already updated. Removed old Libs Successfully ran setupAgentAuth.sh ",
"reports" : null,
"createTimestamp" : "March 05, 2026 10:10:13 AM CET",
"description" : "Update-dcscomponents job completed and is not part of Agent job list",
"updatedTime" : "March 05, 2026 10:18:38 AM CET",
"jobType" : null,
"externalRequestId" : null,
"action" : null
}
odacli describe-admin-job -i cb674b3e-d6eb-4351-be39-0f19b8c56f9d
odacli: 'describe-admin-job' is not an odacli command.
usage: odacli [-h/--help]
<category> [-h/--help]
<operation> [-h/--help]
<command> [-h/--help]
<command> [<args>]
Note that there is no describe-admin-job feature on DB Systems.
odacli create-prepatchreport -sc -v 23.26.1.0.0
sleep 20 ; odacli describe-prepatchreport -i 1b104d06-bc0c-45b8-ab25-b5b6a102a857
Prepatch Report
------------------------------------------------------------------------
Job ID: 1b104d06-bc0c-45b8-ab25-b5b6a102a857
Description: Patch pre-checks for [OS, ORACHKSERVER, SERVER] to 23.26.1.0.0
Status: SUCCESS
Created: March 05, 2026 10:59:34 CET
Result: All pre-checks succeeded
Node Name
---------------
dbs-04-tst
Pre-Check Status Comments
------------------------------ -------- --------------------------------------
__OS__
Validate supported versions Success Validated minimum supported versions.
Validate patching tag Success Validated patching tag: 23.26.1.0.0.
Is patch location available Success Patch location is available.
Verify All OS patches Success No dependencies found for RPMs being
removed, updated and installed. Check
/opt/oracle/dcs/log/jobfiles/
dnfdryrunout_2026-03-05_10-59-
50.0718_832.log file for more details
Validate there is usable Success Successfully validated
space under repo volume /opt/oracle/dcs/repo usable space
Validate command execution Success Validated command execution
__ORACHK__
Running orachk Success Successfully ran Orachk
Validate command execution Success Validated command execution
__SERVER__
Validate local patching Success Successfully validated server local
patching
Validate all KVM ACFS Success All KVM ACFS resources are running
resources are running
Validate DB System VM states Success All DB System VMs states are expected
Enable support for Multi-DB Success No need to convert the DB System
Validate DB System AFD state Success AFD is not configured
Validate there is usable Success Successfully validated
space under repo volume /opt/oracle/dcs/repo usable space
Validate command execution Success Validated command execution
odacli update-servercomponents -v 23.26.1.0.0
The DB System will reboot.
odacli describe-job -i 2b4da73a-7f64-48e0-af76-a1d687a0169f
Job details
----------------------------------------------------------------
ID: 2b4da73a-7f64-48e0-af76-a1d687a0169f
Description: Server Patching to 23.26.1.0.0
Status: Success
Created: March 05, 2026 11:04:19 CET
Message:
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Deactivate Unit[dnf-makecache.timer] March 05, 2026 11:04:21 CET March 05, 2026 11:04:21 CET Success
Validating GI user metadata March 05, 2026 11:04:21 CET March 05, 2026 11:04:21 CET Success
Deactivate Unit[kdump.service] March 05, 2026 11:04:22 CET March 05, 2026 11:04:23 CET Success
Modify DBVM udev rules March 05, 2026 11:04:23 CET March 05, 2026 11:04:34 CET Success
Creating local repository March 05, 2026 11:04:34 CET March 05, 2026 11:04:37 CET Success
OSPatchBaseRepo
Updating versionlock plugin March 05, 2026 11:04:37 CET March 05, 2026 11:04:41 CET Success
Applying OS Patches March 05, 2026 11:04:41 CET March 05, 2026 11:07:33 CET Success
Creating local repository HMPPatchRepo March 05, 2026 11:07:34 CET March 05, 2026 11:07:34 CET Success
Applying HMP Patches March 05, 2026 11:07:35 CET March 05, 2026 11:07:38 CET Success
Patch location validation March 05, 2026 11:07:39 CET March 05, 2026 11:07:39 CET Success
Setting SELinux mode March 05, 2026 11:07:39 CET March 05, 2026 11:07:39 CET Success
Oda-hw-mgmt upgrade March 05, 2026 11:07:40 CET March 05, 2026 11:08:08 CET Success
Installing SQLcl software March 05, 2026 11:08:08 CET March 05, 2026 11:08:13 CET Success
Cleanup JRE Home March 05, 2026 11:08:14 CET March 05, 2026 11:08:14 CET Success
Generating and saving BOM March 05, 2026 11:08:17 CET March 05, 2026 11:08:24 CET Success
Update System full patch version March 05, 2026 11:08:17 CET March 05, 2026 11:08:17 CET Success
Update System rebootless patch version March 05, 2026 11:08:17 CET March 05, 2026 11:08:17 CET Success
PreRebootNode Actions March 05, 2026 11:08:24 CET March 05, 2026 11:08:25 CET Success
Reboot Node March 05, 2026 11:08:25 CET March 05, 2026 11:09:59 CET Success
odacli create-prepatchreport -gi -v 23.26.1.0.0
sleep 240 ; odacli describe-prepatchreport -i dd5d216b-d1bc-44cf-bcf8-381da0729469
Prepatch Report
------------------------------------------------------------------------
Job ID: dd5d216b-d1bc-44cf-bcf8-381da0729469
Description: Patch pre-checks for [RHPGI, GI] to 23.26.1.0.0
Status: SUCCESS
Created: March 05, 2026 11:13:21 CET
Result: All pre-checks succeeded
Node Name
---------------
dbs-04-tst
Pre-Check Status Comments
------------------------------ -------- --------------------------------------
__RHPGI__
Validate available space Success Validated free space under /u01
Evaluate GI patching Success Successfully validated GI patching
Validate there is usable Success Successfully validated
space under repo volume /opt/oracle/dcs/repo usable space
Validate command execution Success Validated command execution
__GI__
Validate GI metadata Success Successfully validated GI metadata
Validate supported GI versions Success Successfully validated minimum version
Validate there is usable Success Successfully validated
space under repo volume /opt/oracle/dcs/repo usable space
Is clusterware running Success Clusterware is running
Validate patching tag Success Validated patching tag: 23.26.1.0.0.
Is system provisioned Success Verified system is provisioned
Validate BM versions Success Validated BM server components
versions
Validate kernel log level Success Successfully validated the OS log
level
Validate Central Inventory Success oraInventory validation passed
Validate patching locks Success Validated patching locks
Validate clones location exist Success Validated clones location
Validate command execution Success Validated command execution
odacli update-gihome -v 23.26.1.0.0
sleep 600 ; odacli describe-job -i c93f84fc-5cb2-41bb-9f23-f7ce22b9f5de
Job details
----------------------------------------------------------------
ID: c93f84fc-5cb2-41bb-9f23-f7ce22b9f5de
Description: Patch GI with RHP to 23.26.1.0.0
Status: Success
Created: March 05, 2026 11:22:47 CET
Message:
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Patch GI with RHP to 23.26.1.0.0 March 05, 2026 11:22:59 CET March 05, 2026 11:27:54 CET Success
Starting the clusterware March 05, 2026 11:22:59 CET March 05, 2026 11:22:59 CET Success
Creating GI home directories March 05, 2026 11:23:01 CET March 05, 2026 11:23:01 CET Success
Extract GI clone March 05, 2026 11:23:01 CET March 05, 2026 11:23:01 CET Success
Provisioning Software Only GI with RHP March 05, 2026 11:23:01 CET March 05, 2026 11:23:01 CET Success
Registering image March 05, 2026 11:23:01 CET March 05, 2026 11:23:01 CET Success
Registering image March 05, 2026 11:23:01 CET March 05, 2026 11:23:01 CET Success
Registering working copy March 05, 2026 11:23:01 CET March 05, 2026 11:23:01 CET Success
Patch GI with RHP March 05, 2026 11:23:47 CET March 05, 2026 11:26:58 CET Success
Set CRS ping target March 05, 2026 11:26:58 CET March 05, 2026 11:26:59 CET Success
Updating .bashrc March 05, 2026 11:26:59 CET March 05, 2026 11:26:59 CET Success
Updating GI home metadata March 05, 2026 11:26:59 CET March 05, 2026 11:27:00 CET Success
Updating GI home version March 05, 2026 11:27:00 CET March 05, 2026 11:27:04 CET Success
Updating All DBHome version March 05, 2026 11:27:04 CET March 05, 2026 11:27:08 CET Success
Patch DB System on BM March 05, 2026 11:27:48 CET March 05, 2026 11:27:54 CET Success
Starting the clusterware March 05, 2026 11:27:48 CET March 05, 2026 11:27:48 CET Success
odacli list-dbhomes
ID Name DB Version DB Edition Home Location Status
---------------------------------------- -------------------- -------------------- ---------- -------------------------------------------------------- ----------
9116603b-3b5e-4e92-aa63-baad8ae1d6a8 OraDB23000_home1 23.26.0.0.0 EE /u01/app/oracle/product/23.0.0.0/dbhome_1 CONFIGURED
odacli create-prepatchreport -d -i 9116603b-3b5e-4e92-aa63-baad8ae1d6a8 -v 23.26.1.0.0
sleep 600 ; odacli describe-prepatchreport -i bb16e390-3dcb-4ea0-b8c5-0c22f38ba271
Prepatch Report
------------------------------------------------------------------------
Job ID: bb16e390-3dcb-4ea0-b8c5-0c22f38ba271
Description: Patch pre-checks for [DB, RHPDB, ORACHKDB] to 23.26.1.0.0: DbHome is OraDB23000_home1
Status: FAILED
Created: March 05, 2026 11:59:29 CET
Result: One or more pre-checks failed for [ORACHK]
Node Name
---------------
dbs-04-tst
Pre-Check Status Comments
------------------------------ -------- --------------------------------------
__DB__
Validate DB Home ID Success Validated DB Home ID:
9116603b-3b5e-4e92-aa63-baad8ae1d6a8
Validate patching tag Success Validated patching tag: 23.26.1.0.0.
Is system provisioned Success Verified system is provisioned
Validate minimum agent version Success Validated minimum agent version
Is GI upgraded Success Validated GI is upgraded
Validate available space for Success Validated free space required under
db /u01
Validate there is usable Success Successfully validated Oracle Base
space under oracle base usable space
Validate glogin.sql file Success Successfully verified glogin.sql
won't break patching
Is DB clone available Success Successfully validated clone file
exists
Validate command execution Success Validated command execution
__RHPDB__
Evaluate DBHome patching with Success Successfully validated updating
RHP dbhome with RHP. and local patching
is possible
Validate command execution Success Validated command execution
__ORACHK__
Running orachk Failed DCS-10702 - ORAchk validation failed:
.
Validate command execution Success Validated command execution
Verify the Fast Recovery Area Failed AHF-2929: FRA space management
(FRA) has reclaimable space problem file types are present
without an RMAN backup completion
within the last 7 days
The failure is similar to the one I had when patching the bare metal DB home, but I can ignore this and update the DB home with the force option:
odacli update-dbhome -i 9116603b-3b5e-4e92-aa63-baad8ae1d6a8 -v 23.26.1.0.0 -f
sleep 1200 ; odacli describe-job -i 4fc89556-2f7c-4e5b-a12f-55e32d7e748a
Job details
----------------------------------------------------------------
ID: 4fc89556-2f7c-4e5b-a12f-55e32d7e748a
Description: DB Home Patching to 23.26.1.0.0: Home ID is 9116603b-3b5e-4e92-aa63-baad8ae1d6a8
Status: Success
Created: March 05, 2026 13:36:42 CET
Message:
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Creating wallet for DB Client March 05, 2026 13:37:21 CET March 05, 2026 13:37:21 CET Success
Patch databases by RHP - [CTEST26] March 05, 2026 13:37:21 CET March 05, 2026 13:54:26 CET Success
Updating database metadata March 05, 2026 13:54:26 CET March 05, 2026 13:54:27 CET Success
Upgrade pwfile to 12.2 March 05, 2026 13:54:27 CET March 05, 2026 13:54:32 CET Success
Set log_archive_dest for Database March 05, 2026 13:54:32 CET March 05, 2026 13:54:37 CET Success
Populate PDB metadata March 05, 2026 13:54:38 CET March 05, 2026 13:54:39 CET Success
Generating and saving BOM March 05, 2026 13:54:39 CET March 05, 2026 13:55:08 CET Success
TDE parameter update March 05, 2026 13:55:44 CET March 05, 2026 13:55:44 CET Success
odacli list-databases
ID DB Name DB Type DB Version CDB Class Edition Shape Storage Status DB Home ID
---------------------------------------- ---------- -------- -------------------- ------- -------- -------- -------- -------- ------------ ----------------------------------------
276bf458-db09-4c9a-9cd9-a821e5274fb0 CTEST26 SI 23.26.1.0.0 true OLTP EE odb2 ASM CONFIGURED 9c51039d-ccba-4508-b879-a81b8c18d46a
odacli delete-dbhome -i 9116603b-3b5e-4e92-aa63-baad8ae1d6a8
sleep 100 ; odacli describe-job -i 0994b96e-e174-4776-8699-f179c1d89af0
Job details
----------------------------------------------------------------
ID: 0994b96e-e174-4776-8699-f179c1d89af0
Description: Database Home OraDB23000_home1 Deletion with ID 9116603b-3b5e-4e92-aa63-baad8ae1d6a8
Status: Success
Created: March 05, 2026 13:58:36 CET
Message:
Task Name Start Time End Time Status
---------------------------------------- ---------------------------------------- ---------------------------------------- ----------------
Setting up SSH equivalence March 05, 2026 13:58:36 CET March 05, 2026 13:58:36 CET Success
Setting up SSH equivalence March 05, 2026 13:58:36 CET March 05, 2026 13:58:37 CET Success
Validate DB Home March 05, 2026 13:58:36 CET March 05, 2026 13:58:36 CET Success
9116603b-3b5e-4e92-aa63-baad8ae1d6a8
for deletion
Deleting DB Home by RHP March 05, 2026 13:58:38 CET March 05, 2026 13:59:15 CET Success
odacli describe-component
System Version
--------------
23.26.1.0.0
System Node Name
----------------
dbs-04-tst
Local System Version
--------------------
23.26.1.0.0
Component Installed Version Available Version
---------------------------------------- -------------------- --------------------
OAK 23.26.1.0.0 up-to-date
GI 23.26.1.0.0 up-to-date
DB {
OraDB23000_home2 23.26.1.0.0 up-to-date
[CTEST26]
}
DCSCONTROLLER 23.26.1.0.0 up-to-date
DCSCLI 23.26.1.0.0 up-to-date
DCSAGENT 23.26.1.0.0 up-to-date
DCSADMIN 23.26.1.0.0 up-to-date
OS 8.10 up-to-date
Finally, let’s remove obsolete GI binaries:
du -hs /u01/app/23.26.*
3.9G /u01/app/23.26.0.0
3.6G /u01/app/23.26.1.0
rm -rf /u01/app/23.26.0.0/
Don’t forget to apply this procedure to the other DB Systems.
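Since the sequence of odacli commands is identical on every DB System, it can be scripted. The sketch below loops over DB System hostnames and runs the main steps over ssh; the host list is a hypothetical example, and it defaults to dry-run mode so you can review the commands before executing anything. In practice you would also check each job with odacli describe-job (and the prepatch reports) between steps rather than running them back to back:

```shell
#!/bin/bash
# Sketch: run the DB System patch sequence on several hosts over ssh.
# DBSYSTEMS is a hypothetical host list -- adapt to your environment.
DBSYSTEMS="dbs-04-tst dbs-05-tst"
VERSION="23.26.1.0.0"

# With DRY_RUN=1 (the default here) commands are only printed, not run.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "ssh $1 '$2'"
  else
    ssh "$1" "$2"
  fi
}

# The main patching steps for one DB System; in a real run, check the
# resulting job of each step before launching the next one.
patch_dbsystem() {
  local host=$1
  run "$host" "odacli update-dcsadmin -v $VERSION"
  run "$host" "odacli update-dcscomponents -v $VERSION"
  run "$host" "odacli create-prepatchreport -sc -v $VERSION"
  run "$host" "odacli update-servercomponents -v $VERSION"
  run "$host" "odacli create-prepatchreport -gi -v $VERSION"
  run "$host" "odacli update-gihome -v $VERSION"
}

for dbs in $DBSYSTEMS; do
  echo "### $dbs"
  patch_dbsystem "$dbs"
done
```

The DB home patching and cleanup steps are left out of the loop on purpose, as the DB home IDs differ on each DB System and are best handled interactively.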
Conclusion
Applying this patch is fine, as long as everything is clean and under control. When patching, only use the force option if you’re sure you know what you’re doing. As always, patching an ODA with DB Systems can take quite a lot of time, depending mainly on the number of DB Systems.
The article How to patch your ODA to 19.30? first appeared on dbi Blog.


